Video Palmprint Recognition System Based on Modified Double-line-single-point Assisted Placement

  • Wu, Tengfei (School of Software, Nanchang Hangkong University) ;
  • Leng, Lu (School of Software, Nanchang Hangkong University)
  • Received : 2021.03.20
  • Accepted : 2021.03.24
  • Published : 2021.03.31

Abstract

Palmprint has become a popular biometric modality; however, palmprint recognition has not been conducted in video media. Video palmprint recognition (VPR) has some advantages that are absent in image palmprint recognition. In VPR, registration and recognition can be implemented automatically without users' manual manipulation. A good-quality image can be selected from the video frames or generated from the fusion of multiple video frames. VPR in contactless mode overcomes several problems caused by contact mode; however, contactless mode, especially mobile mode, encounters several severe challenges. The double-line-single-point (DLSP) assisted placement technique can overcome these challenges and effectively reduce the localization error and computation complexity. This paper modifies the DLSP technique to reduce the invalid area in the frames. In addition, the valid frames, in which users place their hands correctly, are selected according to finger gap judgement; some key frames of good quality are then selected from the valid frames as the gallery samples, which are matched with the query samples for the authentication decision. The VPR algorithm is conducted on a system designed and developed on a mobile device.

Ⅰ. INTRODUCTION

Accurate user authentication and authorization control are key functions on the Internet. Possessions used for authentication can be stolen, broken, or forged; passwords used for authentication can be forgotten or attacked. Biometric recognition realizes authentication based on the inherent physical or behavioral characteristics of human beings [1], which overcomes the drawbacks of the traditional authentication technologies [2].

Palmprint has rich discriminative features, including main lines, wrinkles, ridges and minutiae points. In addition, palmprint recognition can have low equipment requirements, fast speed, and high accuracy; therefore, it is widely deemed as an important biometric modality.

Video palmprint recognition (VPR) has some advantages that are absent in image palmprint recognition. In VPR, the registration and recognition can be automatically implemented without users’ manual manipulation. A good-quality image can be selected from the video frames or generated from the fusion of multiple video frames [3]. VPR in contactless mode overcomes the problems caused by contact mode.

● Health risk: It is unhygienic for users to touch the same sensors or devices. Contact acquisition increases the risk of spreading infectious diseases, such as COVID-19.

● Low acquisition flexibility: Contact acquisition reduces users' acceptance, flexibility and comfort.

● Surface contamination: The surface of contact sensors can be contaminated easily, especially in harsh, dirty, and outdoor environments. The surface contamination of contact sensors is likely to degrade the quality of the subsequent acquired biometric images. In addition, the biometric features can be retained on the surface of contact sensors, which increases the risk of leaking biometric privacy.

● Resistance from traditional customs: In some conservative nations/cultures, users of different genders are reluctant to touch the same devices.

Although VPR in contactless mode overcomes the aforementioned problems, it encounters several severe challenges, such as complex backgrounds, varying illumination, and uncontrollable hand placement (pose and location) [4].

This paper develops a practical VPR algorithm and system. The main contributions are summarized as follows.

(1) The double-line-single-point (DLSP) assisted placement technique can overcome the challenges and effectively reduce the localization error and computation complexity. A modified DLSP (MDLSP) technique is developed to reduce the invalid area in the frames.

(2) The valid frames, in which users place their hands correctly, are selected according to finger gap judgement, and then some key frames, which have good quality, are selected from the valid frames as the gallery samples that are matched with the query samples for authentication decision.

(3) The VPR algorithm is conducted on a system designed and developed on a mobile device.

The rest of this paper is organized as follows. Section 2 revisits the related works. Section 3 specifies the methodology. Section 4 presents the experiments and discussions. Finally, the conclusions are drawn in Section 5.

Ⅱ. RELATED WORKS

2.1. Preprocessing

The purpose of palmprint image preprocessing is typically to accurately segment, localize and crop the region of interest (ROI) [5]. Palmprint recognition can be divided into contact mode, contactless mode and mobile mode. Mobile mode is a special case of contactless mode and can be considered the most difficult contactless mode. The preprocessing in contact mode is easier than that in the other two modes. Table 1 summarizes the palmprint preprocessing methods in contactless mode.

Table 1. Contactless palmprint preprocessing.

2.2. Feature Extraction

Palmprint recognition methods can be briefly divided into five categories [16], namely coding-based methods, structure-based methods, subspace-based methods [17], statistics-based methods [18], and deep-learning/machine-learning-based methods. Fusion technologies have also been used in palmprint recognition [19,20]. Because coding-based methods are free from training and have low storage and computational complexity, they are suitable for edge computing devices such as mobile phones with low-performance hardware. Therefore, coding-based methods, including PalmCode [21], OrdinalCode [22], FusionCode [23], CompCode [24], RLOC [25], BOCV [26], E-BOCV [27], DCC [28], DRCC [28], and DOC [29], are employed in our VPR system.
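As an illustration of the coding idea, the following is a minimal NumPy sketch of competitive coding in the spirit of CompCode [24]: the ROI is filtered with real Gabor filters at six orientations jπ/6, and the winning (minimum-response) orientation index is kept per pixel. The filter parameters (`size`, `sigma`, `lam`) are illustrative assumptions, not the values used in the cited papers.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_real(size=17, sigma=3.0, lam=8.0, theta=0.0):
    # Real part of a Gabor filter at orientation theta.
    # size/sigma/lam are illustrative values, not those of the cited papers.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()  # zero-mean so flat regions give zero response

def compcode(roi):
    # Winner-take-all competition over 6 orientations j*pi/6 (j = 0..5):
    # the orientation with the minimal filter response (palm lines are
    # dark) is kept as a 3-bit code per pixel.
    responses = np.stack([
        convolve(roi.astype(float), gabor_real(theta=j * np.pi / 6))
        for j in range(6)
    ])
    return responses.argmin(axis=0)
```

Such a code map is compact (3 bits per pixel) and can be compared with another code map by a normalized Hamming distance, which is why this family of methods suits low-performance mobile hardware.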

Ⅲ. METHODOLOGY

The key frame selection in the VPR system is shown in Figure 1. The valid frames, in which users place their hands correctly, are first selected according to finger gap judgement, and then the key frames, which have good quality, are selected from the valid frames according to quality judgement. Some key frames at registration are used as the gallery samples, while each key frame at authentication is used as a query sample and matched with the gallery samples until the authentication request is approved.

Fig. 1. Key frame selection in VPR system.

3.1. Assisted Placement

The DLSP assisted placement technique can overcome the challenges as well as effectively reduce the localization error and computation complexity. However, the assisted lines in DLSP are oblique, so there is a large invalid area. In addition, the preprocessing is not easily conducted along oblique directions. Thus, the MDLSP technique is developed to reduce the invalid area in the frames and facilitate computation. Figure 2 shows the assisted placement interface for the left hand in the VPR system. The interface can be mirror-flipped for the right hand. The invalid area is very small, and the computation can be easily conducted along the horizontal and vertical directions.

Fig. 2. MDLSP interface.

In MDLSP, there are two horizontal assisted lines (long line CD, short line AB) and one assisted point B (the right end point of line segment AB) on the screen. The point O in the upper left corner is the origin of the coordinate system. The positive directions of the X-axis and Y-axis are horizontally rightward and vertically downward, respectively. The positions of the assisted graphics are defined as follows.

The image is a landscape-orientation screen preview on the mobile device. The width and height are W and H, respectively. When the palm surface is parallel to the interface surface, the upper and lower boundaries of the palm are approximately parallel to AB and CD, respectively. Let the two end points of the upper assisted line be \(A(x_A, y_A)\) and \(B(x_B, y_B)\), respectively.

\(x_{A}=\frac{7}{15} H, y_{A}=\frac{1}{15} W\).       (1)

\(x_{B}=\frac{9}{15} H, y_{B}=\frac{1}{15} W.\)       (2)

Let the two end points of the lower assisted line be \(C(x_C, y_C)\) and \(D(x_D, y_D)\), respectively.

\(x_{C}=\frac{4}{15} H, y_{C}=\frac{14}{15} W\).       (3)

\(x_{D}=\frac{9}{15} H, y_{D}=\frac{14}{15} W\).       (4)

The distance between line segment AB and CD is L.

\(L=y_{D}-y_{B}\).       (5)

L determines the distance from the palm surface to the camera. A user needs to keep his/her palm surface and the lens at a proper distance.
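Eqs. (1)-(5) can be collected into one small helper. This is only a sketch, not the system's actual code; it follows the equations exactly as written in the text (x coordinates scaled by H, y coordinates scaled by W):

```python
def mdlsp_geometry(W, H):
    """Assisted points A, B, C, D and the line distance L, per Eqs. (1)-(5)."""
    A = (7 * H / 15, 1 * W / 15)   # Eq. (1)
    B = (9 * H / 15, 1 * W / 15)   # Eq. (2)
    C = (4 * H / 15, 14 * W / 15)  # Eq. (3)
    D = (9 * H / 15, 14 * W / 15)  # Eq. (4)
    L = D[1] - B[1]                # Eq. (5): L = y_D - y_B
    return A, B, C, D, L
```

For a 1920x1080 preview, L = 13/15 x 1920 = 1664, which fixes the apparent palm scale and hence the proper distance between the palm surface and the lens.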

3.2. Hand Placement

A user should place his/her hand according to the following rules.

(1) The four fingers (index, middle, ring and little fingers) are naturally brought together, and the thumb is spread out.

(2) The upper and lower boundary lines of the palm should be aligned with and tangent to the two assisted lines.

(3) Point B should be aligned with the intersection of the upper boundary of the index finger and the bottom line of the index finger.

3.3. Valid Frame Selection

The valid frames are selected according to the finger gap judgement [14]. The finger gap is a sub-region of the hand, i.e., the shaded rectangle in Figure 2. The location of the shaded rectangle is:

\(\left\{\begin{array}{c} x:\left[x_{B}, x_{B}+L \times \frac{2}{5}\right] \\ y:\left[y_{B}+L / 10, y_{B}+L \times 4 / 5\right] \end{array} .\right.\)       (6)

If the finger gap appears at the correct location, i.e., inside the dark rectangle in a frame, the user has placed his/her hand correctly according to the assisted placement, and this frame is considered a valid frame. The finger gap processing is shown in Figure 3, and its horizontal integral projection is shown in Figure 4.

Fig. 3. Finger gap processing.

Fig. 4. Horizontal integral projection of finger gap region.
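The finger gap judgement of Eq. (6) can be sketched as follows. The crop uses NumPy's row = y, column = x indexing; the binarization convention (skin = 1, background = 0) and the 20% projection threshold are assumptions for illustration, since the paper does not list these details:

```python
import numpy as np

def finger_gap_region(frame, xB, yB, L):
    # Crop the finger-gap judgement rectangle of Eq. (6)
    # (row = y, column = x convention).
    x0, x1 = int(xB), int(xB + L * 2 / 5)
    y0, y1 = int(yB + L / 10), int(yB + L * 4 / 5)
    return frame[y0:y1, x0:x1]

def has_finger_gap(binary_region, min_gap_rows=3):
    # Horizontal integral projection: sum each row of the binarized
    # sub-region (skin = 1, background = 0).  Rows dominated by background
    # indicate the gap between the fingers; the frame is judged valid if
    # enough such rows appear.
    projection = binary_region.sum(axis=1)
    gap_rows = projection < 0.2 * binary_region.shape[1]  # assumed threshold
    return int(gap_rows.sum()) >= min_gap_rows
```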

3.4. ROI Localization and Cropping

The shaded area in Figure 5 is the ROI. The location of ROI is:

Fig. 5. ROI localization and cropping.

\(\left\{\begin{array}{l} x:\left[x_{B}-L \times 4 / 5, x_{B}-L / 10\right] \\ y:\left[y_{B}+L / 10, y_{B}+L \times 4 / 5\right] \end{array}\right.\).       (7)
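A matching sketch of Eq. (7); both intervals have length 0.7L, so the cropped ROI is square. As with the finger gap crop, row = y, column = x indexing is an assumption of this sketch:

```python
import numpy as np

def crop_roi(frame, xB, yB, L):
    # ROI of Eq. (7): a 0.7L x 0.7L square to the left of point B,
    # between the two assisted lines (row = y, column = x convention).
    x0, x1 = int(xB - L * 4 / 5), int(xB - L / 10)
    y0, y1 = int(yB + L / 10), int(yB + L * 4 / 5)
    return frame[y0:y1, x0:x1]
```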

3.5. Key Frame Selection

For no-reference image quality evaluation, the grayscale variance product is used to evaluate the quality of the valid frames; it is defined as:

\(G(f)=\sum_{y} \sum_{x}|f(x, y)-f(x+1, y)| \times|f(x, y)-f(x, y+1)|.\)       (8)

To reduce the computation complexity, only the quality of the ROI is evaluated. The valid frames have different qualities, as shown in Figure 6. The valid frames whose grayscale variance products are higher than a threshold β are selected as the key frames.

Fig. 6. The gray-scale variance product values of valid frames.
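Eq. (8) and the threshold rule can be sketched as follows; the difference ordering matches Eq. (8), and beta is left as a parameter because its value is tuned experimentally:

```python
import numpy as np

def gray_variance_product(roi):
    # Eq. (8): no-reference sharpness measure.  The horizontal and
    # vertical neighboring-pixel differences are multiplied pointwise
    # and summed over the ROI; sharper images score higher.
    f = roi.astype(float)
    dx = np.abs(f[:, :-1] - f[:, 1:])  # |f(x, y) - f(x + 1, y)|
    dy = np.abs(f[:-1, :] - f[1:, :])  # |f(x, y) - f(x, y + 1)|
    return float((dx[:-1, :] * dy[:, :-1]).sum())

def select_key_frames(rois, beta):
    # Keep the valid frames whose sharpness exceeds the threshold beta.
    return [r for r in rois if gray_variance_product(r) > beta]
```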

3.6. Registration

The system inputs the gallery video, and the ROI set is \(R_r=\{r_1, r_2, \ldots, r_{n-1}, r_n\}\). The template set generated from the ROI set is \(T_r=\{t_1, t_2, \ldots, t_{n-1}, t_n\}\). Assume that the number of gallery templates is \(k\), the registration distance judgment threshold is \(h_r\), and the gallery template set is \(T_r^{\prime}=\{t_1^{\prime}, t_2^{\prime}, \ldots, t_k^{\prime}\}\); the gallery templates are generated by the following registration algorithm.

Registration algorithm
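The registration algorithm itself is given as a figure. As a hypothetical sketch only, the following assumes that a template is enrolled into the gallery only if its distance to every already-enrolled template exceeds \(h_r\), so that the \(k\) gallery templates cover distinct hand appearances; the `hamming` helper is the normalized Hamming distance used for matching in this paper.

```python
def hamming(a, b):
    # Normalized Hamming distance between two equal-length binary codes.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def register(templates, k, hr, distance=hamming):
    # Hypothetical registration sketch (the paper's actual algorithm is
    # given as a figure): enroll a template only if it differs from every
    # enrolled template by more than hr, until k templates are collected.
    gallery = []
    for t in templates:
        if all(distance(t, g) > hr for g in gallery):
            gallery.append(t)
        if len(gallery) == k:
            break
    return gallery
```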

3.7. Verification

During authentication, the system inputs the authentication video; the template set generated from its ROIs is \(T_v=\{t_1, t_2, \ldots, t_{n-1}, t_n\}\), and the authentication distance threshold is \(h_v\). The authentication algorithm is:

Verification algorithm
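The verification loop described in Section 3 (each key-frame template from the authentication video is matched against the gallery until the request is approved) can be sketched as below; the `hamming` helper and the threshold value are illustrative:

```python
def hamming(a, b):
    # Normalized Hamming distance between two equal-length binary codes.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def verify(query_templates, gallery, hv, distance=hamming):
    # Each key-frame template from the authentication video is matched
    # against the gallery templates; the first match whose distance is
    # at most hv approves the authentication request.
    for q in query_templates:
        if any(distance(q, g) <= hv for g in gallery):
            return True   # request approved
    return False          # no query template matched the gallery
```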

Ⅳ. EXPERIMENTAL RESULTS

4.1. Database

The hand video capture application is shown in Figure 7. We captured the videos of 100 palms (the left and right hands of 50 persons); each palm has 5 videos. Each video lasts about 6 seconds, and its size is about 10 MB. The ROIs are cropped from the valid frames, as shown in Figure 8.

Fig. 7. Hand video capture APP interface.

Fig. 8. ROI cropping.

4.2. Accuracy

Equal error rate (EER) and decidability index d' are used to evaluate the accuracy. d' is defined as:

\(d^{\prime}=\frac{\left|\mu_{1}-\mu_{2}\right|}{\sqrt{\frac{\sigma_{1}^{2}+\sigma_{2}^{2}}{2}}}\)       (9)

where μ1 and μ2 are the mathematical expectations of intra-class and inter-class normalized Hamming distances, respectively. σ1 and σ2 are the standard deviations of intra-class and inter-class normalized Hamming distances, respectively.
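Eq. (9) is straightforward to compute from the two distance samples; a minimal sketch (`np.std` is the population standard deviation, matching the definition):

```python
import numpy as np

def decidability(intra, inter):
    # Decidability index d' of Eq. (9): separation between the intra-class
    # and inter-class normalized-Hamming-distance distributions.
    mu1, mu2 = np.mean(intra), np.mean(inter)
    s1, s2 = np.std(intra), np.std(inter)
    return abs(mu1 - mu2) / np.sqrt((s1**2 + s2**2) / 2)
```

A larger d' means the genuine and impostor distance distributions overlap less, so verification decisions are easier.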

Table 2 shows the EER and d'. Figure 9 shows the ROC curves. The experimental results demonstrate the effectiveness of our VPR algorithm and system.

Table 2. Verification accuracy.

Fig. 9. ROC curves.

Ⅴ. CONCLUSIONS AND FUTURE WORKS

Palmprint recognition in video media is a novel technology. This paper modifies the DLSP technique to reduce the invalid area in the frames. In addition, the valid frames are selected according to finger gap judgement, and then some key frames are selected from the valid frames according to image quality as the gallery samples, which are matched with the query samples for the authentication decision. The VPR algorithm is conducted on a system designed and developed on a mobile device. In the future, we will further modify the assisted placement technique or develop other state-of-the-art assisted techniques, and employ other recognition methods, such as deep learning, to improve the accuracy.

Acknowledgement

This research was supported by National Natural Science Foundation of China (61866028, 61866025, 61763033), Key Program Project of Research and Development (Jiangxi Provincial Department of Science and Technology) (20203BBGL73222), Innovation Foundation for Postgraduate (YC2019093).

References

  1. D. Jeong, B. G. Kim and S. Y. Dong, "Deep Joint Spatiotemporal Network (DJSTN) for Efficient Facial Expression Recognition," Sensors, vol. 20, no. 7, p. 1936, Mar. 2020. https://doi.org/10.3390/s20071936
  2. J. H. Kim, B. G. Kim, P. P. Roy and D. M. Jeong, "Efficient Facial Expression Recognition Algorithm Based on Hierarchical Deep Neural Network Structure," IEEE Access, vol. 7, pp. 41273-41285, Mar. 2019. https://doi.org/10.1109/access.2019.2907327
  3. B. G. Kim and D. J. Park, "Unsupervised video object segmentation and tracking based on new edge features," Pattern Recognition Letters, vol. 25, no. 15, pp. 1731-1742, Nov. 2004. https://doi.org/10.1016/j.patrec.2004.07.009
  4. L. Leng and A. B. J. Teoh, "Alignment-free row-co-occurrence cancelable palmprint fuzzy vault," Pattern Recognition, vol. 48, no. 7, pp. 2290-2303, Jul. 2015. https://doi.org/10.1016/j.patcog.2015.01.021
  5. B. G. Kim, J. I. Shim and D. J. Park, "Fast image segmentation based on multi-resolution analysis and wavelets," Pattern Recognition Letters, vol. 24, no. 15, pp. 2995-3006, Dec. 2003. https://doi.org/10.1016/S0167-8655(03)00160-0
  6. M. Franzgrote, C. Borg, B. J. T. Ries, S. Bussemaker, X. Jiang, M. Fieleser, and L. Zhang, "Palmprint verification on mobile phones using accelerated competitive code," in Proceeding of 2011 International Conference on Hand-Based Biometrics, Hong Kong, pp. 1-6, Nov. 2011.
  7. S. Aoyama, K. Ito, T. Aoki and H. Ota, "A contactless palmprint recognition algorithm for mobile phones," in Proceeding of International Workshop on Advanced Image Technology, pp. 409-413, Jan. 2013.
  8. H. Sang, Y. Ma and J. Huang, "Robust palmprint recognition base on touch-less color palmprint images acquired," Journal of Signal and Information Processing, vol. 4, no. 2, pp. 134-139, Apr. 2013. https://doi.org/10.4236/jsip.2013.42019
  9. M. K. Balwant, A. Agarwal and C. R. Rao, "Online touchless palmprint registration system in a dynamic environment," Procedia Computer Science, vol. 54, pp. 799-808, Jan. 2015. https://doi.org/10.1016/j.procs.2015.06.094
  10. M. Aykut and M. Ekinci, "AAM-based palm segmentation in unrestricted backgrounds and various postures for palmprint recognition," Pattern Recognition Letters, vol. 34, no. 9, pp. 955-962, Jul. 2013. https://doi.org/10.1016/j.patrec.2013.02.016
  11. M. Aykut and M. Ekinci, "Developing a contactless palmprint authentication system by introducing a novel ROI extraction method," Image and Vision Computing, vol. 40, pp. 65-74, Aug. 2015. https://doi.org/10.1016/j.imavis.2015.05.002
  12. M. Gomez-Barreroa, J. Galbally, A. Morales, M. A. Ferrer, J. Fierrez and J. Ortega-Garcia, "A novel hand reconstruction approach and its application to vulnerability assessment," Information Sciences, vol. 268, pp. 103-121, Jun. 2014. https://doi.org/10.1016/j.ins.2013.06.015
  13. J. S. Kim, G. Li, B. Son and J. Kim, "An empirical study of palmprint recognition for mobile phones," IEEE Transactions on Consumer Electronics, vol. 61, no. 3, pp. 311-319, Oct. 2015. https://doi.org/10.1109/TCE.2015.7298090
  14. L. Leng, F. Gao and Q. Chen, "Palmprint recognition system on mobile devices with double-line-single-point assistance," Personal and Ubiquitous Computing, vol. 22, no. 1, pp. 93-104, Feb. 2018. https://doi.org/10.1007/s00779-017-1105-2
  15. F. Gao, K. Cao, L. Leng and Y. Yuan, "Mobile palmprint segmentation based on improved active shape model," Journal of Multimedia Information System, vol. 5, no. 4, pp. 221-228, Dec. 2018. https://doi.org/10.9717/JMIS.2018.5.4.221
  16. D. Zhong, X. Du and K. Zhong, "Decade progress of palmprint recognition: A brief survey," Neurocomputing, vol. 328, pp. 16-28, 2019. https://doi.org/10.1016/j.neucom.2018.03.081
  17. L. Leng, J. Zhang, G. Chen, M. K. Khan and K. Alghathbar, "Two-directional two-dimensional random projection and its variations for face and palmprint recognition," in Proceeding of International conference on computational science and its applications, pp. 458-470, Jun. 2010.
  18. L. Leng, J. Zhang, M. K. Khan, X. Chen and K. Alghathbar, "Dynamic weighted discrimination power analysis in DCT domain for face and palmprint recognition," in Proceeding of International Journal of the Physical Sciences, vol. 5, no. 17, pp. 2543-2554, 2010.
  19. L. Leng, M. Li, C. Kim and X. Bi, "Dual-source discrimination power analysis for multi-instance contactless palmprint recognition," Multimedia Tools and Applications, vol. 76, no. 1, pp. 333-354, Jan. 2017. https://doi.org/10.1007/s11042-015-3058-7
  20. L. Leng and J. Zhang, "Palmhash code vs. palmphasor code," Neurocomputing, vol. 108, pp. 1-12, May 2013. https://doi.org/10.1016/j.neucom.2012.08.028
  21. D. Zhang, W. K. Kong, J. You and M. Wong, "Online palmprint identification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1041-1050, Sep. 2003. https://doi.org/10.1109/TPAMI.2003.1227981
  22. Z. Sun, T. Tan and Y. Wang, "Ordinal palmprint represention for personal identification," in Proceeding of 2005 IEEE computer society conference on computer vision and pattern recognition, San Diego, USA, pp. 279-284, June. 2005.
  23. A. Kong, D. Zhang and M. Karmel, "Palmprint identification using feature-level fusion," Pattern Recognition, vol. 39, no. 3, pp. 478-487, Mar. 2006. https://doi.org/10.1016/j.patcog.2005.08.014
  24. A. K. Kong and D. Zhang, "Competitive coding scheme for palmprint verification," in Proceeding of the 17th International Conference on Pattern Recognition, vol. 1, pp. 520-523, Aug. 2004.
  25. W. Jia, D. S. Huang and D. Zhang, "Palmprint verification based on robust line orientation code," Pattern Recognition, vol. 41, no. 5, pp. 1504-1513, May. 2008. https://doi.org/10.1016/j.patcog.2007.10.011
  26. Z. Guo, D. Zhang, L. Zhang and W. Zuo, "Palmprint verification using binary orientation co-occurrence vector," Pattern Recognition Letters, vol. 30, no. 13, pp. 1219-1227, Oct. 2009. https://doi.org/10.1016/j.patrec.2009.05.010
  27. L. Zhang, H. Li and J. Niu, "Fragile bits in palmprint recognition," IEEE Signal processing letters, vol. 19, no. 10, pp. 663-666, Oct. 2012. https://doi.org/10.1109/LSP.2012.2211589
  28. Y. Xu, L. Fei, J. Wen and D. Zhang, "Discriminative and robust competitive code for palmprint recognition," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 2, pp. 232-241, Aug. 2018. https://doi.org/10.1109/tsmc.2016.2597291
  29. L. Fei, Y. Xu, W. Tang, and D. Zhang, "Double-orientation code and nonlinear matching scheme for palmprint recognition," Pattern Recognition, vol. 49, pp. 89-101, Jan. 2016. https://doi.org/10.1016/j.patcog.2015.08.001