
A Novel Least Square and Image Rotation based Method for Solving the Inclination Problem of License Plate in Its Camera Captured Image

  • Wu, ChangCheng (Traffic Management Research Institute of the Ministry of Public Security) ;
  • Zhang, Hao (Traffic Management Research Institute of the Ministry of Public Security) ;
  • Hua, JiaFeng (Traffic Management Research Institute of the Ministry of Public Security) ;
  • Hua, Sha (Traffic Management Research Institute of the Ministry of Public Security) ;
  • Zhang, YanYi (Traffic Management Research Institute of the Ministry of Public Security) ;
  • Lu, XiaoMing (Traffic Management Research Institute of the Ministry of Public Security) ;
  • Tang, YiChen (Traffic Management Research Institute of the Ministry of Public Security)
  • Received : 2019.04.08
  • Accepted : 2019.06.03
  • Published : 2019.12.31

Abstract

Recognizing license plates from traffic-camera images is one of the most important tasks in many traffic management systems. Despite the many sophisticated license plate recognition algorithms available, license plate recognition remains a hot research issue because license plates around the world lack a uniform format and their camera-captured images are often affected by multiple adverse factors, such as low resolution, poor illumination, installation problems, etc. A novel method is proposed in this paper to solve the inclination problem of license plates in their camera-captured images through four parts: first, special edge pixels of the license plate are chosen to represent its main information; second, the least square method is used to compute the inclined angle of the license plate; then, a coordinate rotation method is used to rotate the license plate; finally, bilinear interpolation is used to improve the quality of the rotated result. Several experimental results demonstrate that the proposed method solves the inclination problem of license plates visually and improves the recognition rate when used as an image preprocessing step.

1. Introduction

 License plates identify vehicles permitted to travel on roads within a national territory, and no uniform format exists across countries. Fig. 1 shows examples of license plates in China [1]-[2]. Kinds include private, training, police, embassy, new energy, etc. Sizes include 440 mm×220 mm, 440 mm×140 mm, 220 mm×140 mm, etc. Background colors include yellow, blue, white, black, etc. For characters, the English letters are A to Z except I and O, the Arabic numerals are 0 to 9, and the Chinese characters are mainly province abbreviations.


Fig. 1. Different kinds, sizes and formats about license plates in China

 In order to monitor vehicles on the road, several countries have installed large numbers of cameras at road sections and intersections to capture license plate images in recent years, driving the development of several related methods. Palaiahnakote Shivakumara et al. [3] combined convolutional neural networks and recurrent neural networks for license plate classification and recognition. Alexandre Perez et al. [4] combined four well-known classifiers and four multiple classifiers to recognize human action. A. Jalal et al. [5] utilized R transformation of depth silhouettes to recognize human daily activities. Md. Taufeeq Uddin et al. [6] proposed random forests to recognize human activities and postural transitions on the smartphone. Ahmad Jalal et al. [7] proposed multi-fused features to recognize human activities from continuous sequences of depth maps. Faisal Farooq et al. [8] proposed a method to recognize human facial expressions by feeding hybrid features to self-organizing maps. Shohei Suzuki et al. [9] proposed four-directional feature fields as a feature extraction method for character recognition in a nail camera system. Ahmad Jalal et al. [10] proposed depth silhouettes to recognize daily home activities. Shaharyar Kamal et al. [11] proposed a shape and motion features approach to track and recognize human silhouettes. Karpuravalli Srinivas Raghunandan et al. [12] proposed symmetry features based on stroke width to classify license plates. Adnan Farooq et al. [13] proposed a feature-structured framework to track and recognize different activities of multiple users. Maxim Karpushin et al. [14] proposed to use depth information to derive local projective transformations and compute descriptor patches from the texture image.

 In summary, all the algorithms discussed above have their own characteristics and provide good results in their respective environments. However, recognizing license plates from camera-captured images is still a hot research issue because license plates around the world lack a uniform format and their camera-captured images are often affected by multiple adverse factors, such as low resolution, poor illumination, installation problems, etc. In this work, we mainly consider the inclination problem of the license plate, namely that the license plate is inclined with respect to the horizontal direction, due to lane deviation, poor camera installation, etc. Considering license plates separated from their camera-captured images, our proposed method solves the inclination problem through four parts: first, special edge pixels of the license plate are chosen to represent its main information; second, the least square method is used to compute the inclined angle of the license plate [15]-[16]; then, a coordinate rotation method is used to rotate the license plate [17]-[18]; finally, bilinear interpolation is used to improve the quality of the rotated result. Several experimental results demonstrate that our proposed method solves the inclination problem of license plates visually and improves the recognition rate when used as an image preprocessing step.

 This paper is organized as follows. First, the least square based method and the coordinate rotation based method are described in Section 2. Then, a novel method is proposed in Section 3 to solve the inclination problem of the license plate. Section 4 demonstrates the experimental results. Finally, the conclusion is given in Section 5.

 

2. Overviews of least square method and coordinate rotation method

2.1 Least square method

 As shown in Fig. 2, let X-Y denote a rectangular coordinate system and (xk, yk), k = 1, 2, ..., m, denote the coordinates of m points arbitrarily distributed on both sides of a straight line y = ax + b, where a and b are the line slope and intercept respectively. The least square method (LSM), a straight-line fitting method, obtains the parameters a and b of y = ax + b by minimizing the sum of squared errors between yk and (axk + b) through the following steps:

 1. Defining the objective function S(a,b) :

\(S(a, b)=\sum_{k=1}^{m}\left[\left(a x_{k}+b\right)-y_{k}\right]^{2}\)       (1)

 2. Setting the partial derivatives of S(a,b) with respect to a and b to zero to obtain the line parameters:

\(\left\{\begin{array}{l}\frac{\partial S(a, b)}{\partial b}=2\sum_{k=1}^{m}\left[\left(a x_{k}+b\right)-y_{k}\right]=0 \\ \frac{\partial S(a, b)}{\partial a}=2\sum_{k=1}^{m}x_{k}\left[\left(a x_{k}+b\right)-y_{k}\right]=0\end{array}\right. \Rightarrow \left\{\begin{array}{l}a \sum_{k=1}^{m} x_{k}+m b=\sum_{k=1}^{m} y_{k} \\ a\sum_{k=1}^{m} x_{k}^{2}+b\sum_{k=1}^{m} x_{k}=\sum_{k=1}^{m} x_{k} y_{k}\end{array}\right. \Rightarrow\left[\begin{array}{ll}\sum_{k=1}^{m} x_{k} & m \\ \sum_{k=1}^{m} x_{k}^{2} & \sum_{k=1}^{m} x_{k}\end{array}\right]\left[\begin{array}{l}a \\ b\end{array}\right]=\left[\begin{array}{l}\sum_{k=1}^{m} y_{k} \\ \sum_{k=1}^{m} x_{k} y_{k}\end{array}\right]\)       (2)


Fig. 2. Points distributed on both sides of a straight line in the rectangular coordinate system
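 The 2×2 normal equations above can be solved in closed form. The following sketch (in Python for readability; the paper's implementation is in C++, and the point data here are hypothetical) fits a and b by Cramer's rule:

```python
def fit_line(points):
    """Fit y = a*x + b by solving the normal equations of Eq. (2):
    [[Sx, m], [Sxx, Sx]] [a, b]^T = [Sy, Sxy]^T."""
    m = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    det = sx * sx - m * sxx          # determinant of the 2x2 system
    a = (sy * sx - m * sxy) / det    # Cramer's rule for the slope
    b = (sx * sxy - sxx * sy) / det  # Cramer's rule for the intercept
    return a, b

# Points lying exactly on y = 2x + 1 recover a = 2, b = 1.
a, b = fit_line([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```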

 

2.2 Coordinate rotation algorithm

 As shown in Fig. 3, let \(\overrightarrow{\mathrm{OA}}\) denote a vector in X-Y, r denote the distance between the origin O and an arbitrary point A, and ϕ denote the angle between \(\overrightarrow{\mathrm{OA}}\) and the X axis. The coordinates of A are \(x_{A}=r \cos \phi\) and \(y_{A}=r \sin \phi\). Let \(\overrightarrow{\mathrm{OB}}\) be the vector obtained by rotating \(\overrightarrow{\mathrm{OA}}\) around O by angle θ in the counter-clockwise direction. The coordinates of point B are described in Eq. (3).

\(\left\{\begin{array}{l}x_{B}=r\cos (\phi+\theta)=r \cos \phi \cos \theta-r \sin \phi \sin \theta=x_{A} \cos \theta-y_{A} \sin \theta \\ y_{B}=r \sin (\phi+\theta)=r \cos \phi \sin \theta+r \sin \phi \cos \theta=x_{A} \sin \theta+y_{A} \cos \theta\end{array}\right. \Rightarrow\left[\begin{array}{ll}x_{B} & y_{B}\end{array}\right]=\left[\begin{array}{ll}x_{A} & y_{A}\end{array}\right]\left[\begin{array}{cc}\cos (\theta) & \sin (\theta) \\ -\sin (\theta) & \cos (\theta)\end{array}\right]\)       (3)


Fig. 3. Coordinate rotation method in a rectangular coordinate system
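 Eq. (3) can be sketched as follows (Python for illustration; the function name is ours):

```python
import math

def rotate_point(x, y, theta):
    """Rotate (x, y) about the origin by theta radians in the
    counter-clockwise direction, per Eq. (3)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Rotating (1, 1) by 45 degrees lands on the Y axis at (0, sqrt(2)).
xb, yb = rotate_point(1.0, 1.0, math.radians(45))
```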

 

3. Proposed method

 Since our proposed method focuses on correcting the inclination of license plates in their camera-captured images, segmentation and recognition methods are not the focus of this paper; they can be consulted in Ref. [19], Ref. [20] and Ref. [21]. Our proposed method consists of four parts, as shown in Fig. 4:

 Part 1, called “License plate edge finding operation,” finds the edge of the license plate through two steps: transforming the input license plate image from RGB space to gray space, and finding the edge of the license plate according to the gray image obtained in Step 1.

 Part 2, called “License plate angle computation operation,” computes the inclined angle of the license plate through two steps: finding the rectangle information area of the license plate and choosing some special edge pixels as samples, and using the LSM to compute the inclined angle based on the coordinates of the special edge pixels chosen in Step 1.

 Part 3, called “License plate rotating operation,” rotates the license plate by the inclined angle from Part 2 through two steps: using the coordinate rotation method to rotate the coordinates of the input image by the inclined angle, and assigning each rotated coordinate the pixel value of its pre-rotation coordinate to form the output image.

 Part 4, called “License plate reconstruction operation,” repairs the poorly rotated pixels from Part 3 through two steps: using the coordinate rotation method to rotate the coordinates of the pixels that were not well rotated in Part 3 back to the input image, and using bilinear interpolation to further improve the output image.


Fig. 4. Structure diagram of our proposed method

 

3.1 Edge information finding operation

 Fig. 5 shows the definition of the inclined angle of a license plate. Dotted line 1 and dotted line 2 denote the horizontal direction and the upper edge of the license plate respectively. The angle θ between dotted line 1 and dotted line 2 denotes the inclined angle of the license plate with respect to the horizontal direction.


Fig. 5. Definition about the inclined angle of license plate

 Let U1 and U2 denote images of the license plate in color and gray space respectively. At each location (i,j), the transform from the color pixel value U1i,j to the gray pixel value U2i,j is described in Eq. (4).

\(U_{i, j}^{2}=U_{i, j}^{1, R} \times 0.299+U_{i, j}^{1, G} \times 0.587+U_{i, j}^{1, B} \times 0.114\)       (4)

where \(U_{i, j}^{1, R}, U_{i, j}^{1, G}, U_{i, j}^{1, B}\) are the RGB components of U1i,j. Let U3 denote the edge image of U2, with each pixel value described in Eq. (5).

\(U_{\mathrm{i}, \mathrm{j}}^{3}=\left\{\begin{array}{ll}255 & \left|U_{\mathrm{i}+1, \mathrm{j}}^{2}-\mathrm{U}_{\mathrm{i}, \mathrm{j}}^{2}\right| \geq \text { threshold } \\0 & \left|U_{\mathrm{i}+1, \mathrm{j}}^{2}-\mathrm{U}_{\mathrm{i}, \mathrm{j}}^{2}\right|<\text { threshold }\end{array}\right.\)       (5)

where U3i,j denotes the result of judging whether U2i,j is an edge pixel under the parameter threshold. If the gradient between U2i+1,j and U2i,j is no smaller than threshold, U2i,j is judged to be an edge pixel of the license plate. Fig. 6a and Fig. 6b are the images of the license plate in color and gray space respectively. Fig. 6c is the gradient image of Fig. 6b. Fig. 6d, Fig. 6e and Fig. 6f are the edge images of Fig. 6b with threshold = 20, threshold = 50 and threshold = 100 respectively. As seen from Fig. 6d to Fig. 6f, as threshold increases from 20 to 100, the chosen edge pixels gradually disappear.

 Eq. (5) is mainly used to find the edge pixels that will serve as the samples of Eq. (8) for computing the inclined angle of the license plate. However, if threshold is set too small, more edge pixels, as in Fig. 6d, will be chosen as samples, representing more edge information but incurring a large computation cost. Conversely, if threshold is set too large, fewer edge pixels, as in Fig. 6f, will be chosen, reducing the computation cost but representing little edge information. Therefore, threshold should be set neither too large nor too small. As in Fig. 6e, we usually adopt threshold \(\in[40,60]\) in Eq. (5) to obtain edge pixels that represent the edge information of the license plate while keeping the computation cost of Eq. (8) low.


Fig. 6. Example about obtaining edge pixels of license plate:  (a) color image, (b) gray image, (c) gradient image,  (d) edge image (threshold=20), (e) edge image (threshold=50), (f) edge image (threshold=100)
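 Eqs. (4) and (5) can be sketched as follows (Python for illustration; the function names and the tiny test image are ours, not from the paper):

```python
def to_gray(rgb):
    """Eq. (4): weighted RGB-to-gray conversion, applied per pixel."""
    return [[r * 0.299 + g * 0.587 + b * 0.114 for (r, g, b) in row]
            for row in rgb]

def edge_image(gray, threshold=50):
    """Eq. (5): mark pixel (i, j) as an edge pixel (255) when the
    vertical gradient |gray[i+1][j] - gray[i][j]| reaches threshold."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(h - 1):
        for j in range(w):
            if abs(gray[i + 1][j] - gray[i][j]) >= threshold:
                edges[i][j] = 255
    return edges

# A dark row above a bright row produces one row of edge pixels.
gray = [[0, 0, 0], [200, 200, 200], [200, 200, 200]]
edges = edge_image(gray, threshold=50)
```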

 

3.2 Inclined angle computing operation

 Based on the edges of the license plate, the steps of computing \(\theta\) are as follows:

 Step1: finding information area

 As shown in Fig. 6a, a license plate image usually includes not only the license plate itself but also other parts, such as the front bumper, the bonnet, etc. Hence, the area of the license plate itself, denoted the “rectangle” area in U3, is sufficient for computing the inclined angle, with “up,” “down,” “left” and “right” being its four sides:

\(\left\{\begin{array}{ll}\text { up: } \sum_{t=1}^{W} U^3_{\text{up},t}>0, & 0 \leqslant \text{up} \leqslant \frac{1}{3} H \\ \text { down: } \sum_{t=1}^{W} U^3_{\text{down},t}>0, & \frac{2}{3} H \leqslant \text{down} \leqslant H \\ \text { left: } \sum_{t=1}^{H} U^3_{t,\text{left}}>0, & 0 \leqslant \text{left} \leqslant \frac{1}{7} W \\ \text { right: } \sum_{t=1}^{H} U^3_{t,\text{right}}>0, & \frac{6}{7} W \leqslant \text{right} \leqslant W\end{array}\right.\)       (6)

 Let us take Fig. 6e as an example of finding the “rectangle” information area. When Eq. (6) is applied to Fig. 6e, “up = 15” and “down = 43,” as shown in Fig. 7a, and “left = 8” and “right = 108,” as shown in Fig. 7b, together combining into the “rectangle” area of Fig. 7c. Comparing Fig. 6e with the “rectangle” area of Fig. 7c, the image size decreases from 112×64 to 101×29. Therefore, 4239 pixels need not be considered as license plate information when computing the inclined angle in the following steps.


Fig. 7. Example about finding “rectangle” information area: (a) “up=15”  and “down=43”, (b) “left=8” to “right=108”, (c) “rectangle” area
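 The boundary search of Eq. (6) can be sketched as follows (Python for illustration; the paper does not state which row or column inside each search band is kept, so this sketch takes the boundary closest to the edge content, an assumption on our part):

```python
def find_rectangle(edges):
    """Eq. (6): locate the "rectangle" information area of the edge
    image.  "up" is searched in the top third of the rows, "down" in
    the bottom third, "left" in the leftmost seventh of the columns,
    and "right" in the rightmost seventh."""
    h, w = len(edges), len(edges[0])

    def row_has_edge(i):
        return any(edges[i][j] for j in range(w))

    def col_has_edge(j):
        return any(edges[i][j] for i in range(h))

    up = next((i for i in range(h // 3 + 1) if row_has_edge(i)), 0)
    down = next((i for i in range(h - 1, 2 * h // 3 - 1, -1)
                 if row_has_edge(i)), h - 1)
    left = next((j for j in range(w // 7 + 1) if col_has_edge(j)), 0)
    right = next((j for j in range(w - 1, 6 * w // 7 - 1, -1)
                  if col_has_edge(j)), w - 1)
    return up, down, left, right

# Edge content occupying rows 2..7 and columns 2..12 of a 9x14 image.
edges = [[255 if 2 <= i <= 7 and 2 <= j <= 12 else 0
          for j in range(14)] for i in range(9)]
rect = find_rectangle(edges)
```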

 Step2: choosing special edge pixel

 Although the “rectangle” area contains almost only the information of the license plate itself, it still includes many edge pixels and would incur a large computation cost when Eq. (8) is used to compute the inclined angle. Therefore, in order to reduce the computation cost while still representing the edge information, Eq. (7) is used to choose “special” edge pixels inside the “rectangle” area as the samples of Eq. (8).

\(\left\{\begin{aligned}\text { special if } \mathrm{U}_{\mathrm{i}, \mathrm{j}}^{3}=255, \mathrm{U}_{\mathrm{i}+1, \mathrm{j}}^{3}=255, \mathrm{U}_{\mathrm{i}+2, \mathrm{j}}^{3}=255, \mathrm{U}_{\mathrm{i}+3, \mathrm{j}}^{3}=255 \\\text { no-special otherwise }\end{aligned}\right.\)       (7)

 According to Eq. (7), if the pixels in four continuous rows of one column are all edge pixels, the one in the first row is taken as a “special” sample. The red-circled pixel in Fig. 8a is an example of a “special” pixel, because the three pixels directly below it in the same column are also edge pixels. When Eq. (7) is applied to Fig. 8b, the “special” samples found are shown in Fig. 8c. Compared with Fig. 8b, which has 351 edge pixels in the “rectangle” area, Fig. 8c uses only 36 edge pixels to represent the inclination information of the license plate, removing 315 edge pixels from the computation of Eq. (8).


Fig. 8. Example about “special” edge pixel: (a) an enlarged part of Fig. 8b, (b) “rectangle” area, (c) “special” edge pixel
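 The “special” pixel test of Eq. (7) can be sketched as follows (Python for illustration; keeping only the topmost pixel of each vertical run is our reading of the “first row” wording):

```python
def special_pixels(edges, up, down, left, right):
    """Eq. (7): inside the "rectangle" area, a pixel is "special" when
    it and the three pixels directly below it are all edge pixels;
    only the topmost pixel of each such vertical run is kept."""
    samples = []
    for j in range(left, right + 1):
        for i in range(up, down - 2):
            run_of_four = all(edges[i + k][j] == 255 for k in range(4))
            topmost = i == up or edges[i - 1][j] != 255
            if run_of_four and topmost:
                samples.append((i, j))
    return samples

# A solid 6-row block of edge pixels yields one sample per column.
edges = [[255 if 2 <= i <= 7 and 2 <= j <= 12 else 0
          for j in range(14)] for i in range(9)]
samples = special_pixels(edges, 2, 7, 2, 12)
```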

 Step3: computing the inclined angle

 According to the coordinates of the “special” edge pixels, \(\theta\) can be computed from Eq. (2) as follows:

\(\theta=\mathrm{a} \times 180 / \pi=\left(\frac{\sum_{\mathrm{k}=1}^{\mathrm{tol}} k(i) \times k(j)-\frac{1}{\mathrm{tol}} \sum_{\mathrm{k}=1}^{\mathrm{tol}} k(i) \sum_{\mathrm{k}=1}^{\mathrm{tol}} k(j)}{\sum_{\mathrm{k}=1}^{\mathrm{tol}} k(i)^{2}-\frac{1}{\mathrm{tol}}\left(\sum_{\mathrm{k}=1}^{\mathrm{tol}} k(i)\right)^{2}}\right) \times 180 / \pi\)       (8)

where tol denotes the number of “special” edge pixels, and k(i) and k(j) denote the horizontal and vertical coordinates of the kth “special” edge pixel respectively. When the pixel coordinates of Fig. 7c are applied to Eq. (8), the fitted slope is -0.0207 in radians, i.e. -1.19º in degrees. That is, Fig. 7c is inclined -1.19º with respect to the horizontal direction.
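 Eq. (8) reduces to the least-square slope of the sample coordinates; a sketch in Python (note that the paper converts the slope to degrees with the small-angle approximation a·180/π rather than the exact atan(a)·180/π):

```python
import math

def inclined_angle(samples):
    """Eq. (8): least-square slope of the "special" sample coordinates,
    converted to degrees via the paper's small-angle approximation."""
    n = len(samples)
    si = sum(i for i, _ in samples)
    sj = sum(j for _, j in samples)
    sij = sum(i * j for i, j in samples)
    sii = sum(i * i for i, _ in samples)
    a = (sij - si * sj / n) / (sii - si * si / n)  # fitted slope
    return a * 180.0 / math.pi

# Samples along a line of slope 0.1 give roughly 5.73 degrees.
theta = inclined_angle([(0, 0.0), (10, 1.0), (20, 2.0)])
```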

 

3.3 License plate rotating operation

 License plate rotation, the operation of rotating the input image U1 around its center \((0.5 \mathrm{W}_1, 0.5 \mathrm{H}_1)\) by angle \(\theta\), includes Part A and Part B as follows:

 Part A: coordinate rotating map

 This part rotates the coordinates of the input image U1 to those of the output image OUT around the center of U1 by angle \(\theta\) through the following steps.

 1. Mapping coordinate \((i, j)\)  in I-J  to coordinate (x,y)  in X-Y .

\([\mathrm{x}, \mathrm{y}, 1]=[\mathrm{i}, \mathrm{j}, 1]\left[\begin{array}{ccc}1 & 0 & 0 \\0 & -1 & 0 \\-0.5 \mathrm{W}_{1} & 0.5 \mathrm{H}_{1} & 1\end{array}\right]\)       (9)

 2. Rotating one point (x1,y1) around (x=0,y=0) by angle \(\theta\) to another point (x2,y2) in the counter-clockwise direction according to Eq. (3) in X-Y.

\(\left[x_{2}, y_{2}, 1\right]=\left[x_{1}, y_{1}, 1\right]\left[\begin{array}{ccc}\cos (\theta) & \sin (\theta) & 0 \\ -\sin (\theta) & \cos (\theta) & 0 \\ 0 & 0 & 1\end{array}\right]\)       (10)

 3. Mapping coordinate (x,y)  in X-Y  to coordinate  \((\hat{i}, \hat{j})\) in \(\hat{I}-\hat{J}\)  .

\([\hat{i}, \hat{j}, 1]=[x, y, 1]\left[\begin{array}{ccc}1 & 0 & 0 \\0 & -1 & 0 \\0.5 W_{2} & 0.5 \mathrm{H}_{2} & 1\end{array}\right]\)       (11)

where H1 and W1 are the height and width of the input image U1, H2 and W2 are the height and width of the output image OUT, (i,j) and I-J denote the pixel coordinates and coordinate system of U1 respectively, and \((\hat{i}, \hat{j})\) and \(\hat{I}-\hat{J}\) denote the pixel coordinates and coordinate system of OUT respectively. Since the coordinate rotation rotates the license plate around its center pixel, the center coordinate of I-J is the origin of X-Y, i.e. \((i=0.5W_1, j=0.5H_1)\) of I-J is (x=0,y=0) of X-Y. From Step 1 to Step 3, the coordinate mapping from the input image to the output image can be described as follows:

\([\hat{i}, \hat{j}, 1]=[i, j, 1]\left[\begin{array}{ccc}1 & 0 & 0 \\0 & -1 & 0 \\-0.5 \mathrm{W}_{1} & 0.5 \mathrm{H}_{1} & 1\end{array}\right]\left[\begin{array}{ccc}\cos (\theta) & \sin (\theta) & 0 \\ -\sin (\theta) & \cos (\theta) & 0 \\0 & 0 & 1\end{array}\right]\left[\begin{array}{ccc}1 & 0 & 0 \\0 & -1 & 0 \\0.5 \mathrm{W}_{2} & 0.5 \mathrm{H}_{2} & 1\end{array}\right]\)       (12)

 Fig. 8 demonstrates a coordinate rotation example from U1 to OUT. Fig. 8a shows a \(5 \times 5\)-pixel image with W1=4 and H1=4. According to Eq. (9), x=i-2 and y=2-j. Fig. 8b shows that (i,j)=(2,2) is (x,y)=(0,0). Fig. 8c shows that \(\left(x_{1}, y_{1}\right)=(1,1)\) changes to \(\left(x_{2}, y_{2}\right)=(0, \sqrt{2})\) when rotated around (0,0) by \(\theta=45^{\circ}\) in the counter-clockwise direction, with \(W_{2}=4 \sqrt{2}\) and \(H_{2}=4 \sqrt{2}\). Fig. 8d shows that \(\hat{i}=x_{2}+0.5 \mathrm{W}_2=2 \sqrt{2}\) and \(\hat{j}=0.5 \mathrm{H}_{2}-y_{2}=\sqrt{2}\). That is to say, \((i, j)=(3,1)\) in the input image is mapped to \((\hat{i}, \hat{j})=(2 \sqrt{2}, \sqrt{2})\) in the output image.


Fig. 8. Coordinate rotation example from U1  to OUT :  (a) input image coordinate system I-J , (b) mathematical coordinate system X-Y  ,  (c) coordinate rotating in counter clockwise direction, (d) output image coordinate system \(\hat{\mathbf{I}}-\hat{\mathbf{J}}\)
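 The three-step mapping of Eq. (12) can be sketched as follows (Python; the function name is ours):

```python
import math

def forward_map(i, j, theta, w1, h1, w2, h2):
    """Eq. (12): map input coordinate (i, j) to output coordinate
    (i_hat, j_hat) -- center, rotate counter-clockwise, de-center."""
    x, y = i - 0.5 * w1, 0.5 * h1 - j               # Eq. (9): flip y, center
    x2 = x * math.cos(theta) - y * math.sin(theta)  # Eq. (10): CCW rotation
    y2 = x * math.sin(theta) + y * math.cos(theta)
    return x2 + 0.5 * w2, 0.5 * h2 - y2             # Eq. (11): de-center

# The worked example: (i, j) = (3, 1) in a 5x5 image rotated 45 degrees
# maps to (2*sqrt(2), sqrt(2)) in the enlarged output image.
s2 = math.sqrt(2)
ih, jh = forward_map(3, 1, math.radians(45), 4, 4, 4 * s2, 4 * s2)
```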

 Part B: pixel replacing

 Based on the coordinate rotation map in Part A, each output pixel value \(\mathrm{OUT}_{\hat{i},\hat{j}}\) is defined as follows:

\(\mathrm{OUT}_{\hat{i}, \hat{j}}=U_{i, j}^{1}\)       (13)

 If \((\hat{i}, \hat{j})\) is an integer coordinate, \(\mathrm{OUT}_{\hat{i},\hat{j}}\) is assigned \(U^{1}_{i,j}\). However, if \((\hat{i}, \hat{j})\) is not an integer coordinate, \(\mathrm{OUT}_{\hat{i},\hat{j}}\) cannot be assigned directly because \((\hat{i}, \hat{j})\) is not a real image coordinate. Fig. 9a is the “rectangle” area obtained from Fig. 6a. Fig. 9b is the result of rotating Fig. 9a around its center pixel by its inclined angle -1.19º. As seen in Fig. 9b, the red-marked pixels are those whose coordinates map from integers in U1 to non-integers in OUT.


Fig. 9. Example of the license plate rotation result: (a) “rectangle” area, (b) rotated image

 

3.4 License plate reconstructing operation

 License plate reconstruction, the operation of repairing the red-marked pixels in Fig. 9b, includes Part A and Part B as follows:

 Part A: coordinate rotating

 This part rotates the coordinates of the output image OUT back to those of the input image U1 around the center of OUT by angle \(\theta\) through the following steps.

 1. Mapping coordinate \((\hat{i}, \hat{j})\) in \(\hat{I}-\hat{J}\) to coordinate (x,y) in X-Y.

\([\mathrm{x}, \mathrm{y}, 1]=[\hat{\imath}, \hat{\jmath}, 1]\left[\begin{array}{ccc}1 & 0 & 0 \\0 & -1 & 0 \\-0.5 \mathrm{W}_{2} & 0.5 \mathrm{H}_{2} & 1\end{array}\right]\)       (14)

 2. Rotating one point (x1,y1)  around (x=0,y=0)  by \(\theta\)  angle to the other point (x2,y2)  in clockwise direction according to Eq.(3) in X-Y .

\(\left[x_{2}, y_{2}, 1\right]=\left[x_{1}, y_{1}, 1\right]\left[\begin{array}{ccc}\cos (\theta) & -\sin (\theta) & 0 \\ \sin (\theta) & \cos (\theta) & 0 \\0 & 0 & 1\end{array}\right]\)       (15)

 3.  Mapping coordinate (x,y) in X-Y  to coordinate (i,j)   in I-J  .

\([\mathrm{i}, \mathrm{j}, 1]=[\mathrm{x}, \mathrm{y}, 1]\left[\begin{array}{ccc}1 & 0 & 0 \\0 & -1 & 0 \\0.5 \mathrm{W}_{1} & 0.5 \mathrm{H}_{1} & 1\end{array}\right]\)       (16)

 Similar to Eq. (12), the coordinate mapping from the output image to the input image can be described as follows:

\([i, j, 1]=[\hat{i}, \hat{j}, 1]\left[\begin{array}{ccc}1 & 0 & 0 \\0 & -1 & 0 \\-0.5 W_2 & 0.5 H_{2} & 1\end{array}\right]\left[\begin{array}{ccc}\cos (\theta) & -\sin (\theta) & 0 \\ \sin (\theta) & \cos (\theta) & 0 \\0 & 0 & 1\end{array}\right]\left[\begin{array}{ccc}1 & 0 & 0 \\0 & -1 & 0 \\0.5 W_1 & 0.5 H_1 & 1\end{array}\right]\)       (17)

 Compared with Section 3.3, Steps 1 and 3 are the same, while the rotation direction in Step 2 is opposite. Fig. 10 demonstrates an example of coordinate rotation from OUT to U1. Fig. 10a is the same as Fig. 8d, with \(W_{2}=4 \sqrt{2}\) and \(H_{2}=4 \sqrt{2}\). According to Eq. (14), \(x=\hat{i}-2 \sqrt{2}\) and \(y=2 \sqrt{2}-\hat{j}\). Fig. 10b shows that \((\hat{i}, \hat{j})=(2 \sqrt{2}, 2 \sqrt{2})\) is \((x, y)=(0,0)\). Fig. 10c shows that \(\left(x_{1}, y_{1}\right)=(0, \sqrt{2})\) changes to \(\left(x_{2}, y_{2}\right)=(1,1)\) when rotated around (0,0) by \(\theta=45^{\circ}\) in the clockwise direction, with W1=4 and H1=4. Fig. 10d shows that \(i=x_2+0.5 W_1=3\) and \(j=0.5 H_1-y_2=1\). That is to say, \((\hat{i}, \hat{j})=(2 \sqrt{2}, \sqrt{2})\) in the output image is mapped to \((i, j)=(3,1)\) in the input image.


Fig. 10. Coordinate rotation example from OUT  to  U1 : (a) output image coordinate system \(\hat{\mathbf{I}}-\hat{\mathbf{J}}\) , (b) mathematical coordinate system  X-Y, (c) coordinate rotating in clockwise direction, (d) input image coordinate system I-J .

 Part B: pixel evaluating

 For the pixels that are not well rotated, like the red-marked ones in Fig. 9b, based on the coordinate mapping from \((\hat{i}, \hat{j})\) to \((i, j)\), we apply bilinear interpolation over the neighbors of \(U^{1}_{i,j}\) to evaluate their outputs as follows:

\(\mathrm{OUT}_{\hat{i}, \hat{j}}=\left(1-d_{1}\right)\left(1-d_{2}\right) U_{i, j}^{1,1}+\left(1-d_{1}\right) d_{2}\, U_{i, j}^{1,2}+d_{1}\left(1-d_{2}\right) U_{i, j}^{1,3}+d_{1} d_{2}\, U_{i, j}^{1,4}\)       (18)

where \(U_{i, j}^{1,1}, U_{i, j}^{1,2}, U_{i, j}^{1,3}\) and \(U_{i, j}^{1,4}\) are the upper-left, lower-left, upper-right and lower-right neighbors of \(U^{1}_{i,j}\), and d1 and d2 are the fractional distances from \(U^{1}_{i,j}\) to \(U_{i, j}^{1,1}\) along the i and j directions respectively. If (i,j) is not an integer coordinate, the coordinates of its four neighbors are all integers. For example, if \((i, j)=(2 \sqrt{2}, \sqrt{2})\), then \(U_{2 \sqrt{2}, \sqrt{2}}^{1,1}=U_{2,1}^{1}, U_{2 \sqrt{2}, \sqrt{2}}^{1,2}=U_{2,2}^{1}, U_{2 \sqrt{2}, \sqrt{2}}^{1,3}=U_{3,1}^{1}\) and \(U_{2 \sqrt{2}, \sqrt{2}}^{1,4}=U_{3,2}^{1}\).
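 Eq. (18) can be sketched as a standard bilinear interpolation (Python for illustration; d1 and d2 are the fractional offsets from the upper-left neighbor, and the first index of the image is i, matching \(U^{1}_{i,j}\)):

```python
import math

def bilinear(img, i, j):
    """Eq. (18): evaluate img at fractional coordinate (i, j) as the
    distance-weighted sum of its four integer neighbors."""
    i0, j0 = int(math.floor(i)), int(math.floor(j))
    d1, d2 = i - i0, j - j0
    return ((1 - d1) * (1 - d2) * img[i0][j0]      # upper-left  U^{1,1}
            + (1 - d1) * d2 * img[i0][j0 + 1]      # lower-left  U^{1,2}
            + d1 * (1 - d2) * img[i0 + 1][j0]      # upper-right U^{1,3}
            + d1 * d2 * img[i0 + 1][j0 + 1])       # lower-right U^{1,4}
```

For instance, halfway between two neighbors the result is their average, and at an integer coordinate it reduces to the pixel itself.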

 Fig. 11 shows an example of the license plate reconstruction result. First, Fig. 11a, which is the same as Fig. 9b, is rotated 1.19º around the center pixel in the clockwise direction. Then the two pixels marked red in Fig. 11a are evaluated from the samples in Fig. 9a according to Eq. (18), producing the reconstruction result shown in Fig. 11b.


Fig. 11. Example of the license plate reconstruction result: (a) rotated image, (b) reconstructed image

 From all the above, each step of our algorithm can be described as follows:

 1. Finding the edge of the license plate based on its gray image U2.

 2. Using the coordinates of the special pixels of the license plate as the samples of the LSM to compute the inclined angle \(\theta\).

 3. Rotating (i,j) of U1 around \((i=0.5W_1, j=0.5H_1)\) by angle \(\theta\) in the counter-clockwise direction to \((\hat{i}, \hat{j})\) of OUT and outputting each pixel value \(OUT_{\hat{i}, \hat{j}} = U^{1}_{i,j}\).

 4. Rotating \((\hat{i}, \hat{j})\) of OUT around \((\hat{i}=0.5W_2, \hat{j}=0.5H_2)\) by angle \(\theta\) in the clockwise direction to (i,j) of U1 for the pixels that were not well rotated in Step 3, and evaluating them by bilinear interpolation according to Eq. (18).
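 The four steps can be collapsed into a single backward-mapping pass, in which every output pixel is mapped back to the input via Eq. (17) and evaluated with the bilinear interpolation of Eq. (18). This is a common simplification of the paper's forward-map-then-repair procedure, sketched here in Python with illustrative names:

```python
import math

def rotate_image(img, theta_deg, fill=0):
    """Rotate img (a list of rows of gray values) about its center by
    theta_deg counter-clockwise.  Each output pixel (i_hat, j_hat) is
    mapped back to the input via Eq. (17) and evaluated by Eq. (18)."""
    h1, w1 = len(img), len(img[0])
    t = math.radians(theta_deg)
    # Output canvas just large enough to hold the rotated image.
    w2 = int(math.ceil(abs(w1 * math.cos(t)) + abs(h1 * math.sin(t))))
    h2 = int(math.ceil(abs(w1 * math.sin(t)) + abs(h1 * math.cos(t))))
    out = [[fill] * w2 for _ in range(h2)]
    for jh in range(h2):            # j_hat: output row
        for ih in range(w2):        # i_hat: output column
            x, y = ih - 0.5 * w2, 0.5 * h2 - jh      # Eq. (14)
            xs = x * math.cos(t) + y * math.sin(t)   # Eq. (15): clockwise
            ys = -x * math.sin(t) + y * math.cos(t)
            i, j = xs + 0.5 * w1, 0.5 * h1 - ys      # Eq. (16)
            i0, j0 = int(math.floor(i)), int(math.floor(j))
            if 0 <= i0 < w1 - 1 and 0 <= j0 < h1 - 1:
                d1, d2 = i - i0, j - j0
                out[jh][ih] = ((1 - d1) * (1 - d2) * img[j0][i0]
                               + (1 - d1) * d2 * img[j0 + 1][i0]
                               + d1 * (1 - d2) * img[j0][i0 + 1]
                               + d1 * d2 * img[j0 + 1][i0 + 1])
    return out

# Rotating by 0 degrees reproduces the interior of the input image.
src = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
dst = rotate_image(src, 0.0)
```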

 

4. Experimental Results and Discussion

4.1 Experimental environment

 To evaluate the performance of our proposed method, we conducted our experiment on a real road under different weather and lighting conditions. Fig. 12 shows the environment. The pole is installed near a road intersection, and the cameras are mounted on the pole 6 meters above the ground. On each road lane, vehicles are captured by the cameras when they are about 20 to 30 meters away from the projection of the pole. Usually, the speeds of the moving vehicles are around 80~100 km/h. In addition, our proposed algorithm is implemented in the cameras in C++ under an embedded Linux system.


Fig. 12. Experimental framework: (a) sketch map, (b) real environment

 

4.2 Experiments on different kinds of vehicles

 To show the visual performance of our proposed method, Fig. 13, Fig. 14 and Fig. 15 show its results on private, coach, training and police vehicles. In each figure, images in the first row are the input license plates separated from their camera-captured images, images in the second row are the edges of the input images, images in the third row are the rectangle character areas of the license plates, and images in the fourth row are the results of our proposed method. As seen from the fourth-row images in each figure, our proposed method achieves good visual performance by separating the license plate accurately and solving its inclination problem.


Fig. 13. Experiment about our proposed method on the private vehicles


Fig. 14. Experiment about our proposed method on the coach vehicles


Fig. 15. Experiment about our proposed method on the training and the police vehicles.

 

4.3 Experiments under different negative factors

 In addition, Fig. 16 shows the performance of our proposed method when the experimental environment is affected by negative factors. The driver's face is marked red to protect privacy throughout the experiment. Fig. 16a and Fig. 16d demonstrate the experiment when vehicles deviate from the road lane, with license plates inclined in their traffic camera images. Fig. 16b, Fig. 16e, Fig. 16c and Fig. 16f demonstrate the experiment under rain and night conditions, which may influence the brightness, color and definition of license plates. As seen at the bottom of each picture, the left images are license plates separated from the camera-captured images, and the right images are the results of our proposed method, which separates the license plates accurately and solves their inclination problems. That is to say, the proposed method can overcome the bad influence of lane deviation, adverse illumination, etc.


Fig. 16. Experiment with our proposed method under different negative factors: (a) and (d) lane deviation, (b) and (e) rain condition, (c) and (f) night condition.

 

4.4 Recognition performance analysis

 To examine how much our proposed method improves the license plate recognition rate, we designed an experiment that runs several recent recognition methods with and without our proposed method as a preprocessing step. The license plate kinds include private, coach, training and police, with 1000 plates of each kind throughout the experiment. As seen from Table 1, when “NO” (not using our proposed algorithm before applying Ref. 3 to recognize license plates), the mistakes on English letters for the private, coach, training and police plates are 32, 29, 36 and 45 respectively, and the mistakes on Arabic numerals are 30, 28, 37 and 42. Conversely, when “YES” (using our proposed method first), the mistakes on English letters decrease to 25, 21, 24 and 23, and the mistakes on Arabic numerals decrease to 21, 18, 21 and 18. Assuming each wrongly recognized license plate has only one mistake in an English letter or Arabic numeral, the recognition rate of Ref. 3 increases from 93.0% (“NO”) to 95.7% (“YES”). Analyzing the remaining methods in Table 1 in the same way, the recognition rates of Ref. 22 and Ref. 23 increase from 92.7% to 95.9% and from 93.1% to 95.8% respectively. Therefore, our proposed method increases the license plate recognition rate when used as an image preprocessing step.

Table 1. Performance of recognition methods with ("YES") or without ("NO") using our proposed method before recognizing license plates



5. Conclusion

 In this work, a novel method is proposed to correct the inclination of license plates in their camera captured images. First, special edge pixels are chosen as the samples of a least square fit that computes the inclination angle of the license plate. Then, the license plate is rotated by the computed inclination angle around its center pixel. Finally, bilinear interpolation is used to improve the quality of the rotated license plate. Several experimental results demonstrate that the proposed method visually solves the inclination problem of license plates and improves the recognition rate when used as an image preprocessing method.
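The three stages summarized above can be sketched in a few lines of NumPy. This is our own illustrative implementation, not the authors' code: `edge_pts` is assumed to be an (N, 2) array of (x, y) coordinates of the selected edge pixels, and the plate image a 2-D grayscale array.

```python
import numpy as np

def inclination_angle(edge_pts):
    """Fit y = a*x + b to the edge pixels by least squares; return atan(a)."""
    x, y = edge_pts[:, 0], edge_pts[:, 1]
    a, _b = np.polyfit(x, y, 1)   # slope and intercept of the fitted line
    return np.arctan(a)           # inclination angle in radians

def rotate_bilinear(img, angle):
    """Rotate `img` by -angle about its center, with bilinear interpolation."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    c, s = np.cos(angle), np.sin(angle)
    # Inverse mapping: for each output pixel, find its source coordinate.
    sx = c * (xs - cx) - s * (ys - cy) + cx
    sy = s * (xs - cx) + c * (ys - cy) + cy
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    fx = np.clip(sx - x0, 0.0, 1.0)
    fy = np.clip(sy - y0, 0.0, 1.0)
    # Bilinear blend of the four neighbouring source pixels.
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

A deskewed plate would then be obtained as `rotate_bilinear(plate, inclination_angle(edge_pts))`, since correcting an inclination of angle θ requires rotating the image by -θ.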

 

Acknowledgements

 We thank the reviewers for their valuable advice, which helped us improve the quality of this paper. This work is supported by the "Public Security Theory and Soft Science Research Program (2018LLYJGAJT043)" project.

References

  1. License plates of motor vehicles of the People's Republic of China. GA/36.
  2. Christos-Nikolaos E Anagnostopoulos, Ioannis E. Anagnostopoulos, Ioannis D. Psoroulas, Vassili Loumos, and Eleftherios Kayafas, "License Plate Recognition From Still Images and Video Sequences: A Survey," IEEE Trans. On Intelligent Transportation Systems, Vol.9, no. 3, pp.377-391, 2008. https://doi.org/10.1109/TITS.2008.922938
  3. Palaiahnakote Shivakumara, Dongqi Tang, Maryam Asadzadehkaljahi, Tong Lu, Umapada Pal, Mohammad Hossein Anisi, "CNN-RNN based method for license plate recognition," CAAI Transactions on Intelligence Technology, Vol.3, No.3, pp.169-175, July, 2018. https://doi.org/10.1049/trit.2018.1015
  4. Alexandre Perez, Hedi Tabia, David Declercq, Alain Zanotti, "Feature covariance for human action recognition," in Proc. of International Conference on Image Processing Theory, Tools and Applications, December 2016.
  5. A. Jalal, Md. Zia Uddin, T.-S. Kim, "Depth video based human activity recognition system using translation and scaling in variant features for life logging at smart home," IEEE Transactions on Consumer Electronics, Vol.58, No.3, pp.863-871, August 2012. https://doi.org/10.1109/TCE.2012.6311329
  6. Md. Taufeeq Uddin, Md. Muttlaleb Billah, Md. Faisal Hossain, "Random forests based recognition of human activities and postural transitions on smart phone," in Proc. of International Conference on Informatics, Electronics and Vision, May 2016.
  7. Ahmad Jalal, Yeon-Ho Kim, Yong-Joong Kim, Shaharyar Kamal, Daijin Kim, "Robust human activity recognition from depth video using spatiotemporal multi-fused features," Pattern Recognition, Vol.61, pp.295-308, 2017. https://doi.org/10.1016/j.patcog.2016.08.003
  8. Faisal Farooq, Jalal Ahmed, Lihong Zheng, "Facial expression recognition using hybrid features and self-organizing maps," in Proc. of IEEE International Conference on Multimedia and Expo., July 2017.
  9. Shohei Suzuki, Kunihito Kato, Kazuhiko Yamamoto, "A consideration of effective feature extraction method for Nail camera system," in Proc. of 17th Korea-Japan Joint Workshop on Frontiers of Computer Vision, March 2011.
  10. Ahmad Jalal, Md. Zia Uddin, Jeong Tai Kim, Tae-Seong Kim, "Recognition of human home activities via depth silhouettes and R transformation for smart homes," Indoor and Built Environment, Vol. 21, No.1, pp. 184-190, September 2011. https://doi.org/10.1177/1420326X11423163
  11. Ahmad Jalal, Shaharyar Kamal, Daijin Kim, "Shape and Motion Features Approach for Activity Tracking and Recognition from Kinect Video Camera" in Proc. of 29th International Conference on Advanced Information Networking and Applications Workshops, pp.445-450, March 2015.
  12. Karpuravalli Srinivas Raghunandan, Palaiahnakote Shivakumara, Lolika Padmanabhan, Govindaraju Hemantha Kumar, Tong Lu, Umapada Pal, "Symmetry features for license plate classification," CAAI Transactions on Intelligence Technology, Vol.3, No.3, pp.176-183, November 2018. https://doi.org/10.1049/trit.2018.1016
  13. Adnan Farooq, Ahmad Jalal, Shaharyar Kamal, "Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map," KSII Transactions on internet and information system, Vol.9, No.5, pp.1856-1869, May 2015. https://doi.org/10.3837/tiis.2015.05.017
  14. Maxim Karpushin, Giuseppe Valenzise, Frederic Dufaux, "Local visual features extraction from texture depth content based on depth image analysis," in Proc. of IEEE International Conference on Image Processing, October 2014.
  15. P. Vanicek, "Further Development and Properties of the spectral Analysis by Least squares," Astrophysics & Space Science, vol.12, no.1, pp. 10-33, 1971. https://doi.org/10.1007/BF00656134
  16. N. R. Lomb, "Least squares frequency analysis unequally spaced data," Astrophysics and space science, vol. 39, pp. 447-462, 1976. https://doi.org/10.1007/BF00648343
  17. A. Acharyya, K. Maharatna, B. M. Al-Hashimi, J. Reeve, "Coordinate Rotation Based Low Complexity N-D FastICA Algorithm and Architecture," IEEE Transactions on Signal Processing, vol. 59, no. 8, pp. 3997-4011, 2011. https://doi.org/10.1109/TSP.2011.2150219
  18. Lie-Chung Shen, Jyh-Ching Juang, Ching-Lang Tsai, Ching-Liang Tseng, "Monitoring Water Levels and Currents Using Reflected GPS Carrier Doppler Measurements and Coordinate Rotation Model," IEEE Transactions on Instrumentation and Measurement, vol.59, no.1, pp.153-163, 2010. https://doi.org/10.1109/TIM.2009.2022113
  19. S. G. Kim, H. G. Jeon, H. I. Koo, "Deep learning based license plate detection method using vehicle region extraction," Electronics Letters, vol.53, pp. 1034-1036, 2017. https://doi.org/10.1049/el.2017.1373
  20. Yule Yuan, Wenbin Zou, Yong Zhao, Xinan Wang, Xuefeng Hu, Nikos Komodakis, "A robust and efficient approach to license plate detection," IEEE Transactions on Image Processing, vol.26, pp. 1102-1114, 2017. https://doi.org/10.1109/TIP.2016.2631901
  21. Bo Li, Bin Tian, Ye Li, Ding Wen, "Component based license plate detection using conditional random field model," IEEE Transactions on Intelligent Transportation Systems, vol.14, pp. 1690-1699, 2013. https://doi.org/10.1109/TITS.2013.2267054
  22. Xiang Bai, Cong Yao, WenYu Liu, "Strokelets: a learned multi-scale mid-level representation for scene text recognition," IEEE Transactions on Image Processing, vol.25, no.6, pp. 2789-2802, 2016. https://doi.org/10.1109/TIP.2016.2555080
  23. Yun Yang, Donghai Li, Zongtao Duan, "Chinese vehicle license plate recognition using kernel-based extreme learning machine with deep convolutional features," IET Intelligent Transport Systems, Vol.12, No.3, pp.213-219, 2018. https://doi.org/10.1049/iet-its.2017.0136