A Multimodal Fusion Method Based on a Rotation Invariant Hierarchical Model for Finger-based Recognition

  • Zhong, Zhen (College of Information Technology Engineering, Tianjin University of Technology and Education) ;
  • Gao, Wanlin (College of Information and Electrical Engineering, China Agricultural University) ;
  • Wang, Minjuan (College of Information and Electrical Engineering, China Agricultural University)
  • Received : 2019.12.17
  • Accepted : 2021.01.14
  • Published : 2021.01.31

Abstract

Multimodal biometric recognition has been an active topic in recent years because of its convenience. Owing to the high user convenience of the finger, finger-based personal identification has been widely used in practice. Hence, taking the Finger-Print (FP), Finger-Vein (FV), and Finger-Knuckle-Print (FKP) as the characteristic ingredients, their feature representations help improve the universality and reliability of identification. To fuse the multimodal finger features effectively, a new robust representation algorithm based on a hierarchical model is proposed. First, to obtain more robust features, the feature maps are computed by Gabor magnitude feature coding and then described by the Local Binary Pattern (LBP). Second, the LGBP-based feature maps are processed hierarchically in a bottom-up mode by variable rectangle and circle granules, respectively. Finally, the intension of each granule is represented by Local-invariant Gray Features (LGFs), yielding the Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs). Experimental results reveal that the proposed algorithm is capable of handling rotation variation of the finger pose and achieves a lower Equal Error Rate (EER) on our homemade database.

1. Introduction

Nowadays, to enhance the recognition rate of biometric systems, multimodal biometric recognition has become a hot topic, and fusing two or more biometrics together is widely used in multimodal biometric systems [1,2]. Owing to their high user acceptance and convenience, the Finger-Print (FP) [3-5], Finger-Vein (FV) [6-8], and Finger-Knuckle-Print (FKP) [9-11] are taken as the research objects of this paper. To fuse the multimodal finger features reliably, feature analysis is a principal step.

Recently, many feature representation methods have been proposed to achieve higher recognition accuracy. To represent image features effectively, the 2D Gabor filter was proposed, which exploits texture information at multiple orientations and scales [12,13]. To handle affine transformation, partial occlusion, and rotation-scaling problems, the Scale-Invariant Feature Transform (SIFT) descriptor was proposed [14], which achieves a more robust representation. However, the above methods cannot represent features well under illumination and rotation variation. To address illumination variation, the Local Binary Pattern (LBP) descriptor was proposed [15,16]. To further improve the illumination invariance of feature descriptors, the Gabor Ordinal Measure (GOM) descriptor was proposed, which combines Gabor wavelets with ordinal filters [17]. To improve robustness to illumination and translation variations, the Local Gabor Binary Pattern (LGBP) was proposed, which combines the magnitude feature of the Gabor filter with the LBP feature [18,19]. Although these descriptors achieve good results under deformation, they are not effective for variable finger poses. To address rotation variation, the Multisupport Region Rotation and Intensity Monotonic Invariant Descriptor (MRRID) was proposed based on region-level intensity analysis [20]. Nevertheless, it still has two shortcomings for representing three-modality finger features. First, owing to the small number of interest points in finger-based feature maps, the interest-point detection step is inappropriate there, which necessarily impairs the accuracy of finger-based recognition. Second, the MRRID is rotation-invariant only when representing features of local image regions.

To solve the problem of finger-pose variation effectively, a novel robust intension representation method based on a hierarchical model is proposed, named Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs). First, to enhance the illumination and translation invariance of the finger features, the magnitude features of the FP, FV, and FKP are extracted by an even-symmetric Gabor filter, and LBP is then adopted to obtain the LGBP coding. Second, the finger-feature maps are processed hierarchically by variable rectangle and circle granules in a bottom-up mode, respectively. Finally, to improve rotation invariance, the intension of the feature granules is represented by a modified MRRID. Experiments show that the proposed algorithm achieves better rotation invariance under finger-pose variation and a higher recognition rate on a homemade database. The flow diagram of the proposed algorithm is shown in Fig. 1. The contributions of this paper and the advantages of the proposed algorithm are summarized as follows:


Fig. 1. The Flowchart of the Proposed Method

• A hierarchical model of finger-feature rectangle granulation is designed based on the LGBP descriptor, which achieves higher accuracy and efficiency than finger-feature maps with more rectangle granules in a single layer.

• A hierarchical model of finger-feature circle granulation is designed based on the LGBP descriptor, which achieves higher efficiency than the rectangle granulation based on the LGBP descriptor.

• A robust intension representation method for finger-feature granules with pose invariance is proposed by combining the hierarchical model of rectangle and circle granulation with local-invariant gray features; it is named Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs).

• The proposed finger-based feature representation descriptor effectively improves rotation invariance under variable finger poses.

The remainder of this paper is organized as follows: the methodology is introduced in Section 2, the proposed method is presented in Section 3, experimental results are provided in Section 4, the discussion is given in Section 5, and conclusions and future works are reported in Section 6.

2. Methodology

2.1 Multimodal Finger Images Acquisition

Finger-image acquisition is a key problem in biometric identification. Because the imaging modes and texture features differ among the three finger modalities, bi-spectral imaging with different bands is used to acquire the FP, FV, and FKP images, respectively. Moreover, to ensure finger-pose consistency among the unimodal images, the multimodal finger images are captured automatically and simultaneously by a homemade imaging system. The imaging principle and equipment are shown in Fig. 2(a), and region-of-interest (ROI) samples of the three modalities with the same finger pose are shown in Fig. 2(b) [21].


Fig. 2. A Homemade Database Acquisition Equipment

To determine the position of the center pixel in each granule, both the number of feature granules (FGs) in each layer and the number of pixels in each FG are odd. To ensure that the number of granules in each layer is the same for all multimodal ROI images, the FV, FKP, and FP images are resized to 99×207, 99×207, and 153×153, respectively.

2.2 Global Feature Extraction

Owing to the adjustable orientation and center frequency of the Gabor filter, the texture features of the multimodal finger images are described at various orientations, which enhances their line structures [22]. First, the magnitude finger-based feature maps at eight orientations (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, and 157.5°) [23] are obtained by an even-symmetric Gabor filter, as shown in Fig. 3(a). The even-symmetric Gabor filter is given by


Fig. 3. The Multimodal Finger-based LGBP Feature Coding

\(G(x, y)=\frac{\gamma}{2 \pi \sigma^{2}} \exp \left\{-\frac{1}{2}\left(\frac{x_{\theta_{k}}^{2}+\gamma^{2} y_{\theta_{k}}^{2}}{\sigma^{2}}\right)\right\} \cos \left(2 \pi f_{k} x_{\theta_{k}}\right)\)       (1)

where \(\left[\begin{array}{l} x_{\theta_{k}} \\ y_{\theta_{k}} \end{array}\right]=\left[\begin{array}{cc} \cos \theta_{k} & \sin \theta_{k} \\ -\sin \theta_{k} & \cos \theta_{k} \end{array}\right]\left[\begin{array}{l} x \\ y \end{array}\right]\), \(f_k\) is the center frequency of the \(k\)th orientation, \(\theta_k\) is the angle of the \(k\)th orientation, and \(\sigma\) and \(\gamma\) are the scale of the Gabor filter and the length-width ratio of the envelope, respectively. In this paper, \(\sigma\) = 4, 5, and 6 for the FP, FV, and FKP images, respectively, and \(\gamma\) = 1.
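
As a concrete illustration of Eq. (1), the following is a minimal NumPy sketch of the even-symmetric Gabor filter bank. The kernel size and the center frequency value are illustrative assumptions, since the paper specifies only \(\sigma\) and \(\gamma\):

```python
import numpy as np

def even_gabor_kernel(size, sigma, gamma, freq, theta):
    """Even-symmetric Gabor kernel per Eq. (1): a Gaussian envelope
    modulated by a cosine carrier along orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates into the filter's orientation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (x_t**2 + gamma**2 * y_t**2) / sigma**2)
    return (gamma / (2 * np.pi * sigma**2)) * envelope * np.cos(2 * np.pi * freq * x_t)

# Eight orientations used in the paper: 0°, 22.5°, ..., 157.5°
# (kernel size 21 and freq 0.1 are assumed values for illustration)
thetas = [k * np.pi / 8 for k in range(8)]
bank = [even_gabor_kernel(21, sigma=4.0, gamma=1.0, freq=0.1, theta=t) for t in thetas]
```

In the paper's pipeline, the eight magnitude maps would then come from convolving each ROI image with this bank and taking the absolute response; that convolution step is omitted here.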

Then, to extract finger-based features that are invariant to illumination and translation, the Gabor magnitude maps of the FP, FV, and FKP are encoded by the LBP descriptor [24], and the resulting LGBP feature maps at eight orientations are shown in Fig. 3(b).

\(L B P\left(x_{c}, y_{c}\right)=\sum_{p=0}^{7} S\left(f\left(x_{p}, y_{p}\right)-f\left(x_{c}, y_{c}\right)\right) 2^{p}\)       (2)

where \(S(A)=\left\{\begin{array}{ll} 1, & A \geq 0 \\ 0, & A<0 \end{array}\right.\), \(\left(x_{p}, y_{p}\right)\) are the eight neighbors of the center pixel located at \((x_{c}, y_{c})\), and \(f(\cdot)\) denotes the Gabor magnitude map at a given orientation.
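
The coding of Eq. (2) can be sketched as follows; the neighbour ordering (which offset maps to which bit \(p\)) is a convention choice not fixed by the paper:

```python
import numpy as np

def lbp_map(img):
    """Basic 8-neighbour LBP of Eq. (2): threshold each neighbour
    against the centre pixel and pack the eight sign bits into a byte."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, one per bit p = 0..7 (clockwise from top-left)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for p, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << p
    return out
```

Applying `lbp_map` to each of the eight Gabor magnitude maps yields the eight LGBP maps.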

2.3 Local Feature Extraction

To improve rotation invariance, the local gray feature vectors are formed by the following procedure:

Step 1: gray-based grouping.

The intensities of the pixels are sorted in non-descending order and divided into \(k\) groups according to the number of pixels [20].

Step 2: obtaining local gray feature vector.

The eight nearest neighboring points of each pixel are extracted regularly. A 4-bin binary vector is obtained by computing the difference between each pair of opposite neighboring pixels, as shown in the following formula:

\(\left(\operatorname{sign}\left(I\left(X_{i}^{4+4}\right)-I\left(X_{i}^{4}\right)\right), \operatorname{sign}\left(I\left(X_{i}^{3+4}\right)-I\left(X_{i}^{3}\right)\right), \operatorname{sign}\left(I\left(X_{i}^{2+4}\right)-I\left(X_{i}^{2}\right)\right), \operatorname{sign}\left(I\left(X_{i}^{1+4}\right)-I\left(X_{i}^{1}\right)\right)\right)\)       (3)

where \(I(\cdot)\) is the intensity of a pixel, \(X_{i}\) is the center pixel, and \(X_{i}^{j}, j=1,2, \ldots, 8\) are its eight nearest neighboring points.

A 16-bin gray feature vector is then calculated by mapping the 4-bin binary vector into a 16-bin binary vector:

\(f_{j}=\left\{\begin{array}{l} 1, \text { if } \sum_{m=1}^{4} \operatorname{sign}\left(I\left(X_{i}^{m+4}\right)-I\left(X_{i}^{m}\right)\right) \times 2^{m-1}=j-1 \\ 0, \text { otherwise } \end{array}, j=1,2, \ldots, 2^{4}\right.\)       (4)

where \(f_j\) represents the 16-bin gray feature vector; exactly one of its elements equals 1. Then, the gray feature vectors are accumulated by element-wise addition within each gray-based group. Finally, the accumulated vectors are concatenated to obtain the feature histogram.
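
The whole of this section — gray-based grouping, the opposite-neighbour sign code of Eqs. (3)-(4), and the per-group accumulation — can be sketched as below. The equal-count quantile grouping and the neighbour ordering are assumptions consistent with, but not dictated by, the text:

```python
import numpy as np

def local_gray_histogram(img, k=4):
    """LGF sketch (Sec. 2.3): sort pixels into k intensity groups,
    encode each pixel by the signs of its four opposite-neighbour
    differences (Eqs. 3-4), and accumulate a 16-bin histogram per group."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # 8 neighbours in circular order; pairs (m, m+4) are opposite points
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    inner = img[1:-1, 1:-1]
    # Group boundaries from sorted intensities (equal-count groups)
    qs = np.quantile(inner, np.linspace(0, 1, k + 1)[1:-1])
    groups = np.searchsorted(qs, inner.ravel())
    codes = np.zeros_like(inner, dtype=int)
    for m in range(4):
        dy1, dx1 = offs[m]
        dy2, dx2 = offs[m + 4]
        a = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        b = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        codes += ((a - b) >= 0).astype(int) << m   # one sign bit per pair
    hist = np.zeros((k, 16))
    for g, c in zip(groups, codes.ravel()):
        hist[g, c] += 1                             # one-hot accumulation
    return hist.ravel()   # concatenated k x 16 feature histogram
```

Each value of `codes` is the 4-bit code of Eq. (3); incrementing `hist[g, c]` is equivalent to adding the one-hot 16-bin vectors of Eq. (4) within each group.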

3. The Proposed Method

Feature extraction is essential for constructing the original feature domain. Owing to the illumination and translation invariance of the LGBP descriptor, the feature maps of the multimodal finger images are obtained by the LGBP descriptor, which combines the Gabor magnitude feature with LBP coding. As shown in Table 1, the performance of the LGBP descriptor is the best among the four descriptors; however, its efficiency needs to be improved.

Table 1. The Comparison of Different Methods


3.1 Finger-based Feature Extraction

To overcome rotation variation, the MRRID is applied to the LGBP maps. However, the MRRID is rotation-invariant only in local feature analysis, so a modified MRRID is adopted. The finger-based feature extraction steps are summarized as follows:

First, the LGBP maps of the three finger modalities are obtained by Eqs. (1)-(2).

Then, to overcome the disadvantage of the MRRID, the LGBP maps of the three finger modalities are divided into the same number of granules with different shapes and scales by a hierarchical model in a bottom-up mode. In this paper, a 3-layer bottom-up granule structure is constructed, in which each layer is granulated into a different number of granules, as shown in Fig. 4(a). Owing to the small number of interest points in the finger granules (FGs), the MRRID is modified by treating every pixel as an interest point; the resulting descriptor is named the Local-invariant Gray Feature (LGF). The feature histogram of each FG is obtained by the method in Section 2.3. The constructed feature histogram of each FG is shown in Fig. 4(b), and the overall representation is called Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs).


Fig. 4. The Multimodal Finger-based Granule Features

3.2 Finger-based Feature Matching

Finger-based feature matching is performed by the hierarchical model in a top-down manner, as shown in Fig. 4(a). In the matching process, the finger features are matched from coarse-grained granules to fine-grained granules; if the feature histograms of the coarse-grained granules do not match, the matching process stops. Because coarse-grained granule matching is cheap, this early termination improves matching efficiency.
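
The coarse-to-fine early-termination strategy described above can be sketched as follows (the function and parameter names are illustrative):

```python
def hierarchical_match(probe, gallery, thresholds, sim):
    """Top-down matching sketch (Sec. 3.2): compare layer by layer,
    from the coarsest granules to the finest, and stop as soon as
    one layer's similarity falls below that layer's threshold.

    probe, gallery: per-layer feature histograms, coarsest first;
    thresholds: per-layer acceptance thresholds; sim: similarity fn."""
    for p_layer, g_layer, t in zip(probe, gallery, thresholds):
        if sim(p_layer, g_layer) < t:
            return False          # rejected at a coarse layer: cheap exit
    return True                   # survived every layer: match
```

In practice `sim` would be the histogram intersection of Eq. (5), and the thresholds those selected in Section 4.3.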

4. Experimental Results

The homemade database contains 3000 sets each of FP, FV, and FKP images, collected from 300 people with 10 samples per person. It has the following two traits: first, the ROI images of the same finger are collected at different times, which ensures finger-pose variation between acquisitions; second, the three modal images of a finger are captured at the same time, which ensures finger-pose consistency across modalities.

4.1 The Analysis of Poses Reliability

To demonstrate the rotation invariance of the proposed method, four FV images of the same person with variable poses are selected, as shown in Fig. 5. Moreover, because only the feature representation performance of the proposed descriptor is considered here, and not the feature matching performance, the uni-level LGGIF is used for comparison.


Fig. 5. The Variable Pose of FV Images

Because the final finger-based feature is represented by a histogram, the histogram intersection method [25] is adopted to measure the similarity of feature histograms under different finger poses.

\(\operatorname{sim}\left(m_{1}, m_{2}\right)=\frac{\sum_{l=1}^{L} \min \left[H_{m_{1}}(l), H_{m_{2}}(l)\right]}{\sum_{l=1}^{L} H_{m_{1}}(l)}\)       (5)

where \(H_{m_1}(\cdot)\) and \(H_{m_2}(\cdot)\) denote the two feature histograms being matched, and \(L\) denotes the histogram dimension.
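
Eq. (5) translates directly into code:

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection similarity of Eq. (5), normalised by
    the total mass of the first histogram."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return np.minimum(h1, h2).sum() / h1.sum()
```

Identical histograms score 1 and disjoint histograms score 0, which is why the similarity-coefficient values in Table 2 lie in [0, 1].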

As shown in Table 2, comparison with existing methods indicates that the proposed descriptor deals effectively with different finger poses.

Table 2. The Similarity Coefficient of Histogram


4.2 The Evaluating Indicators

The Receiver Operating Characteristic (ROC) curve is widely used to assess the performance of matching algorithms. It combines the False Acceptance Rate (FAR) with the False Rejection Rate (FRR) and reflects the trade-off between the evaluation indexes of a finger recognition system. Here, the FAR is the rate at which the finger characteristics of different individuals are judged to belong to the same individual, and the FRR is the rate at which the finger characteristics of the same individual are judged to belong to different individuals; together they reflect the reliability and practicability of the system. The operating point at which the FAR equals the FRR is the optimal point of the identification system and is called the Equal Error Rate (EER); the smaller the EER, the more robust the identification system.

\(FAR=\frac{\text{Inter\_S}}{\text{All\_InterClass}}\)       (6)

\(FRR=\frac{\text{Inner\_S}}{\text{All\_InnerClass}}\)       (7)

where Inner_S denotes the number of falsely rejected matches among samples of the same individual, Inter_S denotes the number of falsely accepted matches among samples of different individuals, All_InnerClass denotes the total number of matches between samples of the same individual, and All_InterClass denotes the total number of matches between samples of different individuals.

\(\text{All\_InnerClass}=\text{ClassNum} \times \text{Count} \times (\text{Count}-1)\)       (8)

\(\text{All\_InterClass}=\text{ClassNum} \times (\text{ClassNum}-1) \times \text{Count} \times \text{Count}\)       (9)

where ClassNum represents the number of individuals and Count represents the number of multimodal finger images per person.
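
Given a set of pairwise similarity scores with genuine/impostor labels, Eqs. (6)-(9) amount to simple counting. The function below is a sketch; the names `scores` and `labels` are illustrative:

```python
def error_rates(scores, labels, threshold):
    """FAR and FRR per Eqs. (6)-(9): count false accepts among impostor
    (inter-class) pairs and false rejects among genuine (inner-class)
    pairs at a given similarity threshold.

    scores: similarity value per pair; labels: True for genuine pairs.
    For an exhaustive protocol, len of the genuine/impostor sides equals
    Eq. (8) / Eq. (9) respectively."""
    inter_s = sum(1 for s, g in zip(scores, labels) if not g and s >= threshold)
    inner_s = sum(1 for s, g in zip(scores, labels) if g and s < threshold)
    all_inter = sum(1 for g in labels if not g)
    all_inner = sum(1 for g in labels if g)
    return inter_s / all_inter, inner_s / all_inner   # (FAR, FRR)
```

Sweeping `threshold` over the score range traces the ROC curve; the EER is read off where the two returned rates cross.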

4.3 The Parameter Selection

In the multi-layer finger-based recognition method, the similarity thresholds of the higher layers are selected at the point where the FAR is smallest and the FRR is zero. Accordingly, the thresholds of the second and third layers are set to 0.6447 and 0.6564 for rectangle granules, and to 0.8004 and 0.8149 for circle granules, respectively.

In the robust representation process, two parameters affect the recognition performance of the finger-feature representation: the number of gray-based groups (k) and the number of finger-feature granules (N). To conveniently obtain the centers of the circle granules, the N values of the three layers are set to 9×9, 3×3, and 1×1, respectively. The comparison of different k values with the same N is shown in Fig. 6, which indicates that the finger-feature recognition in the three layers performs best at k = 4, 6, and 8, respectively.


Fig. 6. The Compare Results of Different k in Three Layers

4.4 The Recognition Results

To verify the rotation invariance of the finger-based representation method, finger images of 300 people with 10 variable poses per person are taken from the homemade database. The proposed method is implemented in MATLAB R2014a.

The identification results of the proposed algorithm are shown in Fig. 7, which reveals that its identification performance is the best among the three methods. The matching performances of the different feature representation algorithms are given in Table 3, and the matching results of the proposed method with rectangle and circle granules are listed in Table 4 and Table 5. The tables show that the EERs of the HLGGIF with rectangle granulation and with circle granulation are reduced by 0.974% and 1.392%, respectively, giving better recognition accuracy than uni-level finger-feature recognition, and that the matching times are reduced by 0.08 s and 0.024 s, respectively.


Fig. 7. The Recognition Results

Table 3. The Matching Recognition Performances of Different Description Method with Rectangle Granulation


Table 4. Matching Results from Proposed Algorithm with Rectangle Granulation


Table 5. Matching Results from Proposed Algorithm with Circle Granulation


5. Discussion

In previous studies, FGs with rectangle and circle shapes were adopted to represent the feature structure of the fingers effectively [26]. Those results showed that the rectangle and circle granulation structures can represent the finger features effectively and improve matching efficiency. However, the FG representations used there were sensitive to rotation variation and could not handle variable finger poses. A comparison of different multilevel algorithms on the same homemade database is listed in Table 6, which shows that the proposed method with rectangle granulation has the best identification performance among these algorithms.

Table 6. The Comparison of Different Methods with Multilevel Structure


To solve finger-pose variation effectively, several FG representation methods were proposed in our previous studies. The recognition rate of the proposed feature fusion method was better than that of Gabor orientation coding [27], showing that it can effectively improve finger-pose rotation invariance; however, it remained sensitive to illumination variation. To further improve the descriptor, a feature representation method combining the Gabor magnitude feature with ordinal feature coding was proposed [28].

\(\text { Ordinal }=C_{p} \sum_{i=1}^{N_{p}} \frac{1}{\sqrt{2 \pi \sigma_{p i}}} \exp \left[\frac{-\left(X-\omega_{p i}\right)^{2}}{2 \sigma_{p i}^{2}}\right]-C_{n} \sum_{j=1}^{N_{n}} \frac{1}{\sqrt{2 \pi \sigma_{n j}}} \exp \left[\frac{-\left(X-\omega_{n j}\right)^{2}}{2 \sigma_{n j}^{2}}\right]\)       (10)

where \(\omega\) and \(\sigma\) are the central position and the scale of the ordinal filter, respectively; \(N_p\) and \(N_n\) are the numbers of positive and negative lobes, respectively; and \(C_p\) and \(C_n\) are balance coefficients chosen to satisfy \(C_pN_p = C_nN_n\). Because the difference filter with three lobes is more stable, \(C_p=1\), \(N_p=2\), \(C_n=2\), and \(N_n=1\) [17].
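
A 1-D sketch of the ordinal filter in Eq. (10), using the lobe configuration \(C_p=1\), \(N_p=2\), \(C_n=2\), \(N_n=1\) stated above; the standard Gaussian normalisation \(1/(\sqrt{2\pi}\,\sigma)\) is assumed, and the lobe centers passed below are illustrative:

```python
import numpy as np

def ordinal_filter_1d(x, pos_centers, neg_centers, sigma_p, sigma_n):
    """Difference filter of Eq. (10): positive Gaussian lobes minus
    negative lobes, with Cp*Np = Cn*Nn so the filter integrates to ~0
    (zero DC response, hence illumination robustness)."""
    cp, cn = 1.0, 2.0   # Cp = 1, Cn = 2 with Np = 2, Nn = 1
    pos = sum(np.exp(-(x - w)**2 / (2 * sp**2)) / (np.sqrt(2 * np.pi) * sp)
              for w, sp in zip(pos_centers, sigma_p))
    neg = sum(np.exp(-(x - w)**2 / (2 * sn**2)) / (np.sqrt(2 * np.pi) * sn)
              for w, sn in zip(neg_centers, sigma_n))
    return cp * pos - cn * neg
```

The balance condition \(C_pN_p = C_nN_n\) is exactly what makes the filter's integral vanish, which can be checked numerically.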

Because the LGBP is more robust than the GOM, the proposed algorithm outperforms the GOM in handling the illumination and translation invariance of finger images, as shown in Fig. 8(a). Because circle granulation is more robust than rectangle granulation under rotation variation, the finger-feature representation with circle granulation based on LGBP should outperform that with rectangle granulation. In practice, however, the result with rectangle granulation is better, as shown in Fig. 8(b). Preliminary analysis indicates that some pixels are not described by the proposed algorithm because they are not covered by the circle granules, as shown in Fig. 8(c).


Fig. 8. The Comparison with Results

Because the inner regions of the finger are less sensitive to pose variation, the parameters of a 2D Gaussian model were used to weight each FG in unimodal finger images [29]. The experimental results showed that the weighted intension representation of FGs achieved better matching accuracy than the unweighted method; the coefficients were obtained by the following formula. However, the matching efficiency of the weighted method was lower than that of the unweighted method.

\(\alpha(i, j)=\frac{1}{2 \pi \sigma^{2}} e^{-\frac{(i-\operatorname{mid}(i))^{2}+(j-\operatorname{mid}(j))^{2}}{2 \sigma^{2}}}, \quad i=1,2, \ldots, M, \; j=1,2, \ldots, K\)       (11)

where M and K are the numbers of granule rows and columns, respectively, and mid(i) and mid(j) denote the indices of the center granule in the ith row and jth column, respectively.
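
Eq. (11) can be sketched as follows, taking mid(i) and mid(j) to be the centre row and column indices (which are integers when M and K are odd, as the granule counts in this paper are):

```python
import numpy as np

def granule_weights(M, K, sigma=1.0):
    """2-D Gaussian weighting of Eq. (11): granules near the image
    centre, which are least sensitive to pose change, get the
    largest weight."""
    i = np.arange(1, M + 1)[:, None]   # row indices as a column vector
    j = np.arange(1, K + 1)[None, :]   # column indices as a row vector
    mid_i, mid_j = (M + 1) / 2, (K + 1) / 2
    w = np.exp(-((i - mid_i)**2 + (j - mid_j)**2) / (2 * sigma**2))
    return w / (2 * np.pi * sigma**2)
```

The resulting M×K weight matrix is symmetric about the centre granule, so peripheral granules contribute less to the fused score.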

Based on the above analysis, the weighting coefficients should be applied in each layer of FGs in future work. Although this increases the matching time, the hierarchical feature-fusion structure improves matching efficiency, so the weighting coefficients have little influence on the overall matching process. Moreover, a convolutional neural network structure should be considered to improve the self-adaptability of the proposed hierarchical model.

6. Conclusion and Future Works

To solve the issue that the finger pose is prone to variation during imaging, a novel finger-based feature representation method is proposed in this paper. First, the FP, FV, and FKP maps are obtained by an even-symmetric Gabor filter. Then, the finger-feature maps are described by LBP and processed hierarchically in a bottom-up mode by variable rectangle and circle granules, respectively. Finally, to improve rotation invariance, the intension of each granule is described by the LGF, and the resulting representation is named Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs). The EERs of the HLGGIF with rectangle granulation and with circle granulation are reduced by 0.409% and 0.357%, respectively, while the matching time is reduced by only 0.056 s and 0.023 s. Experiments show that the proposed algorithm performs well in enhancing finger-based recognition accuracy.

This work mainly focuses on finger-pose variation; the parameters of the proposed method are not selected automatically for optimal recognition performance. Future studies may focus on automatic parameter selection, replacing the current empirical parameter determination. Moreover, the proposed method should be further improved in terms of time consumption and accuracy.

Acknowledgement

The authors would like to thank their colleagues for their support of this work. The detailed comments from the anonymous reviewers are gratefully acknowledged. This work was supported by the National Key Research and Development Program (Grant No. 2016YFD0200600-2016YFD0200602) and the Liquor Making Biological Technology and Application Key Laboratory of Sichuan Province (NJ2019-02).

References

  1. A. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4-20, 2004.
  2. M. Sultana, P. P. Paul, and M. L. Gavrilova, "Social behavioral information fusion in multimodal biometrics," IEEE Transactions on Systems Man & Cybernetics Systems, vol. 48, no. 99, pp. 2176-2187, 2017.
  3. A. K. Jain, Y. Chen, and M. Demirkus, "Pores and Ridges: High-Resolution Fingerprint Matching Using Level 3 Features," IEEE Transactions on Pattern Analysis and Machine Intellignece, vol. 29, pp. 15-27, 2007. https://doi.org/10.1109/TPAMI.2007.250596
  4. K. J. Wang, H. Ma, F. X. Guan, and X. F. Li, "Dual-modal decision fusion for fingerprint and finger vein recognition based on image capture quality evaluation," Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, vol. 25, no. 4, pp. 669-675, 2012.
  5. X. Guo, E. Zhu, and J. Yin, "A fast and accurate method for detecting fingerprint reference point," Neural Computing and Applications, vol. 29, no. 1, pp. 21-31, 2018. https://doi.org/10.1007/s00521-016-2285-9
  6. J. Yang, Y. Shi, and J. Yang, "Personal identification based on finger-vein features," Computers in Human Behavior, vol. 27, no. 5, pp. 1565-1570, 2011. https://doi.org/10.1016/j.chb.2010.10.029
  7. F. Liu, G. Yang, Y. Yin, and S. Wang, "Singular value decomposition based minutiae matching method for finger vein recognition," Neurocomputing, vol. 145, pp. 75-89, 2014. https://doi.org/10.1016/j.neucom.2014.05.069
  8. W. F. Aswad, S. K. Guirguis, and M. Z. Rashad, "Investigation of efficiency of using minutiae detection method for finger vein recognition and matching," International Journal of Computer Applications, vol. 114, no. 10, pp. 15-19, 2015. https://doi.org/10.5120/20014-1985
  9. L. Zhang, L. Zhang, D. Zhang, and H. Zhu, "Ensemble of local and global information for finger-knuckle-print recognition," Pattern Recognition, vol. 44, no. 9, pp. 1990-1998, 2011. https://doi.org/10.1016/j.patcog.2010.06.007
  10. M. Hanmandlu and J. Grover, "Feature selection for finger knuckle print-based multimodal biometric system," International Journal of Computer Applications, vol. 38, no. 10, pp. 27-33, 2012. https://doi.org/10.5120/4645-6905
  11. P. F. Yu, H. Zhou, and H. Y. Li, "Personal identification using finger-knuckle-print based on local binary pattern," Applied Mechanics and Materials, vol. 441, pp. 703-706, 2013. https://doi.org/10.4028/www.scientific.net/AMM.441.703
  12. P. N. Loxley, "The two-dimensional gabor function adapted to natural image statistics: a model of simple-cell receptive fields and sparse structure in images," Neural Computation, vol. 29, no. 10, pp. 2769-2799, 2017. https://doi.org/10.1162/neco_a_00997
  13. F. Aziz, H Arof, N. Mokhtar, M. Mubin, and M. S. Abu Talip, "Rotation invariant bin detection and solid waste level classification," Measurement, vol. 65, pp. 19-28, 2015. https://doi.org/10.1016/j.measurement.2014.12.027
  14. A. Skowron, J. Stepaniuk, and R. Swiniarski, "Modeling rough granular computing based on approximation spaces," Information Sciences, vol. 184, no. 1, pp. 20-43, 2012. https://doi.org/10.1016/j.ins.2011.08.001
  15. R. Davarzani, S. Mozaffari, and K. Yaghmaie, "Scale- and rotation-invariant texture description with improved local binary pattern features," Signal Processing, vol. 111, pp. 274-293, 2015. https://doi.org/10.1016/j.sigpro.2014.11.005
  16. L. Liu, S. Lao, P. Fieguth, Y. Guo, X. Wang, and M. Pietikainen, "Median robust extended local binary pattern for texture classification," IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1368-1381, 2016. https://doi.org/10.1109/TIP.2016.2522378
  17. I. Tajouri, W. Aydi, A. Ghorbel, and N. Masmoudi, "Efficient iris texture analysis method based on gabor ordinal measures," Journal of Electronic Imaging, vol. 26, no. 4, 2017.
  18. L. Liu, P. Fieguth, G. Zhao, M. Pietikainen, and D. Hu, "Extended local binary patterns for face recognition," Information Sciences, vol. 358, pp. 56-72, 2016. https://doi.org/10.1016/j.ins.2016.04.021
  19. Z. M. Li, Z. H. Huang, and T. Zhang, "Gabor-scale binary pattern for face recognition," International Journal of Wavelets Multiresolution and Information Processing, vol. 14, no. 5, 2016.
  20. B. Fan, F. Wu, and Z. Hu, "Rotationally invariant descriptors using intensity order pooling," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 34, no. 10, pp. 2031-2045, 2012. https://doi.org/10.1109/TPAMI.2011.277
  21. L. Zhang, L. Zhang, and D. Zhang, "Finger-knuckle-print: A new biometric identifier," in Proc. of the 16th IEEE International Conference on Image Processing, pp. 1981-1984, 2009.
  22. K. Shin, Y. Park, D. Nguyen, and K. Park, "Finger-vein image enhancement using a fuzzy-based fusion method with gabor and retinex filtering," Sensors, vol. 14, no. 2, pp. 3095-3129, 2014. https://doi.org/10.3390/s140203095
  23. M. Wang, D. Tang, and Z. Chen, "Finger vein roi extraction based on robust edge detection and flexible sliding window," International Journal of Pattern Recognition and Artificial Intelligence, vol. 32, no. 4, 2017.
  24. W. C. Zhang, S. G. Shan, X. L. Chen, and W. Gao, "Local gabor binary patterns based on mutual information for face recognition," International Journal of Image & Graphics, vol. 7, no. 4, pp. 777-793, 2007. https://doi.org/10.1142/S021946780700291X
  25. W. Wieclawek, "Information granules in image histogram analysis," Computerized Medical Imaging & Graphics the Official Journal of the Computerized Medical Imaging Society, vol. 65, pp. 129-141, 2018. https://doi.org/10.1016/j.compmedimag.2017.05.003
  26. J. Yang, Z. Zhong, G. Jia, and Y. Li, "Spatial circular granulation method based on multimodal finger feature," Journal of Electrical and Computer Engineering, vol. 2016, pp. 1-7, 2016.
  27. Z. N. Lu, Z. Zhong, G. Jia, Y. Shi, and J. Yang, "A Research for Multimodal Finger-Feature Fusion Method Based on Gabor Coding," Journal of Signal Processing, vol. 31, no. 11, pp. 1467-1472, 2015.
  28. Z. Zhong, G. Jia, Y. Shi, and J. Yang, "A Finger-based Recognition Method with Insensitivity to Pose Invariance," in Proc. of Chinese Conference on Biometric Recognition, 2015.
  29. Y. Shi, Z. Zhong, and J. Yang, "A New Finger Feature Fusion Method Based on Local Gabor Binary Pattern," in Proc. of Chinese Conference on Biometric, 2016.