A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen (College of Information Technology Engineering, Tianjin University of Technology and Education) ;
  • Wang, Minjuan (College of Information and Electrical Engineering, China Agricultural University) ;
  • Gao, Wanlin (College of Information and Electrical Engineering, China Agricultural University)
  • Received : 2019.10.15
  • Accepted : 2020.01.02
  • Published : 2020.11.30

Abstract

Multisource image fusion has become an active research topic in recent years owing to the higher segmentation accuracy it enables. To enhance the accuracy of multimodal pig-body feature segmentation, a multisource image fusion method was employed. However, conventional multisource image fusion methods cannot preserve strong contrast and abundant detail in the fused image. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method, named NSST-GF-IPCNN, is presented. Firstly, the multisource images were decomposed into a series of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetrical Gabor filter and an Improved Pulse Coupled Neural Network (IPCNN) were used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients were reconstructed into a fusion image using the inverse NSST. Finally, the shape feature was extracted using an automatic threshold algorithm and refined using morphological operations, and the highest pig-body temperature was obtained from the segmentation results. Experiments revealed that the presented fusion algorithm achieved a 2.102-4.066% higher average accuracy rate than traditional algorithms while also improving efficiency.


1. Introduction

To accurately assess the health of animals, research on multimodal feature description algorithms for animals has become an active topic. In this paper, the shape and temperature of the pig body are selected as the research objects.

1.1 Pig-body Shape Detection

Current pig-body shape detection systems capture pig bodies in controlled environments. Under such controlled conditions, a pig-body detection algorithm based on visible (VI) images achieved high accuracy [1]. However, VI-based methods cannot reliably detect the pig-body shape in dim conditions, as shown in Fig. 1(e)(g). Since targets remain discoverable in infrared (IR) images under dim conditions, animal-body detection has also been realized with infrared images [2]. Nevertheless, owing to the influence of viewing angle and environment, IR-based methods cannot reliably detect the ears under variable illumination, as shown in Fig. 1(b)(d)(f)(h). Under variable illumination, the detection results for the ears, legs, and tails differ across the multisource images, as shown in Fig. 1. Accordingly, multisource image fusion can provide effective complementary information to enhance pig-body shape detection [3][4].

Fig. 1. The detection results of pig-body in view of multisource images under variable environments (a)-(d) clear environments (e)-(h) dim environments

1.2 Pig-body Temperature Detection

Owing to its non-contact, rapid, and non-destructive characteristics, infrared thermal imaging has been used to obtain pig-body temperature [5][6][7]. Exploiting these characteristics, a method of measuring pig head temperature was proposed based on infrared images [8]. A temperature control method for pigsties was proposed based on the data distribution in infrared images, which can satisfy the temperature comfort of pigs [9]. In general, the highest temperature of the pig body is located at its ears [10]; however, due to the uncontrollable behavior of pigs, the highest temperature may be located elsewhere. To ensure accurate detection of the highest temperature, in this paper the temperature is detected on the basis of accurate shape detection.

1.3 Multi-source Image Fusion

Since the multiscale decomposition of source images resembles aspects of the human visual system [11], transform-domain fusion methods have been widely applied, such as the discrete cosine transform (DCT) [12][13], the contourlet transform (CT) [14], and the shearlet transform (ST) [15]. Nevertheless, these methods are prone to introducing the Gibbs phenomenon into the fused image owing to their sampling operators. To overcome this shortcoming, an image fusion method based on the nonsubsampled contourlet transform (NSCT) was proposed [16]. However, this algorithm has low efficiency.

To address these shortcomings, a modified shearlet transform, named the nonsubsampled shearlet transform (NSST), was proposed [17]. To enhance fusion accuracy, a new multisource image fusion algorithm, named NSST-GF-IPCNN, is presented for pig-body shape and temperature detection in the NSST domain. Firstly, a new multisource image fusion algorithm is devised to detect the pig-body shape and temperature characteristics. Then, the fused images are segmented using an automatic threshold algorithm and morphological operations. Next, the temperature of the pig body is detected based on the segmentation results. Finally, experiments demonstrate that the presented fusion algorithm yields superior shape segmentation and achieves a higher segmentation rate. The flow diagram of the presented algorithm is shown in Fig. 2.

Fig. 2. Flow diagram of the presented detection framework

Generally, the contributions of our presented framework are summarized as follows:

• In the low-frequency subbands, to construct an effective activity measure, the local features of the multisource images are extracted using an even-symmetrical Gabor filter. An effective fusion rule for the low-frequency subbands is then presented on this basis.

• In the high-frequency subbands, to better describe edge information, the high-frequency features are represented using an improved pulse coupled neural network. The maximum-selection strategy is then employed to enhance the segmentation of the pig-body shape feature.

• The presented segmentation framework achieves a higher detection rate than other current algorithms.

The structure of our paper is as follows: the methodology is introduced in Section 2, the presented method is described in Section 3, experimental results and discussion are reported in Section 4, and conclusions and future work are given in Section 5.

2. Methodology

2.1 Multi-source Image Registration

In this paper, FLIR Tools was used to obtain the visible and infrared images of the pig body. Owing to noise and a small amount of misregistration, the multisource images were registered using a conventional transformation model:

\(\left(\begin{array}{l} x_{2} \\ y_{2} \end{array}\right)=\left(\begin{array}{cc} s_{x} & 0 \\ 0 & s_{y} \end{array}\right)\left(\begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array}\right)\left(\begin{array}{l} x_{1} \\ y_{1} \end{array}\right)+\left(\begin{array}{l} t_{x} \\ t_{y} \end{array}\right)\)       (1)

where (x1, y1) are the pixel coordinates in the infrared image, (x2, y2) are the pixel coordinates in the visible image, and sx and sy are the scale parameters in the x and y directions, respectively. \(\left(\begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array}\right)\) is the rotation matrix, and tx and ty are the translation parameters in the x and y directions, respectively.

Since the infrared lens and the digital camera lens are mounted close together on the FLIR C2, the rotation angle and the translation parameters can be ignored, so the above formula simplifies to:

\(\left(\begin{array}{l} x_{2} \\ y_{2} \end{array}\right)=\left(\begin{array}{cc} s_{x} & 0 \\ 0 & s_{y} \end{array}\right)\left(\begin{array}{l} x_{1} \\ y_{1} \end{array}\right)\)       (2)

To improve the accuracy of registration, the RANSAC algorithm [18] was used to estimate the optimal scale parameters.
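As an illustration, the following minimal Python sketch estimates \(s_x\) and \(s_y\) of Eq. (2) with a simple RANSAC loop. The matched keypoint pairs, the iteration count, and the inlier tolerance are assumptions of the sketch; the paper does not detail how correspondences are obtained.

```python
import numpy as np

def ransac_scale(ir_pts, vi_pts, n_iters=500, tol=2.0, seed=0):
    """Estimate the scale parameters (s_x, s_y) of Eq. (2) with RANSAC.

    ir_pts, vi_pts: (N, 2) arrays of matched (x, y) keypoints in the IR and
    VI images (correspondences assumed given and away from the origin).
    """
    rng = np.random.default_rng(seed)
    best_scale, best_count = None, 0
    for _ in range(n_iters):
        i = rng.integers(len(ir_pts))        # one pair determines (s_x, s_y)
        cand = vi_pts[i] / ir_pts[i]         # candidate axis-wise scales
        resid = np.abs(ir_pts * cand - vi_pts).max(axis=1)
        inliers = resid < tol
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            # axis-wise least-squares refit on the consensus set
            best_scale = ((ir_pts[inliers] * vi_pts[inliers]).sum(axis=0)
                          / (ir_pts[inliers] ** 2).sum(axis=0))
    return best_scale
```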

2.2 Nonsubsampled Shearlet Transform

Owing to its high implementation efficiency and good directional description ability, the NSST was used to decompose the multisource images into multiscale and multidirectional subbands via the nonsubsampled pyramid (NSP) and shear filters (SF). Firstly, one low-frequency subband and one high-frequency subband are produced by the NSP at each resolution level, and the multiscale subbands are obtained by iteratively decomposing the low-frequency subband, which captures the singularities in the multisource images. If the number of resolution levels is m, the NSP yields m+1 subbands, all of the same size as the source images. Then, the high-frequency subband at each level is decomposed by the SF with shear parameter n to obtain \(2^{n}\) directional subbands [19][20]. The decomposition process of the nonsubsampled shearlet transform (NSST) is shown in Fig. 3.

Fig. 3. The decomposition process of NSST
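For illustration, a minimal sketch of the NSP stage is given below: each level splits the current approximation into an undecimated lowpass band and a highpass residual, so m levels yield the m+1 equally sized subbands described above. The B3-spline kernel and the à-trous dilation are illustrative assumptions; a full NSST would additionally apply shear filters to every highpass band.

```python
import numpy as np
from scipy.ndimage import convolve

def nonsubsampled_pyramid(img, levels=3):
    """Undecimated multiscale decomposition in the spirit of the NSP:
    returns one low-frequency subband plus `levels` high-frequency
    subbands, all the same size as the source image."""
    k1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline lowpass taps
    low, highs = img.astype(float), []
    for j in range(levels):
        kj = np.zeros(4 * 2 ** j + 1)
        kj[:: 2 ** j] = k1                    # dilate the kernel (a trous)
        smooth = convolve(low, np.outer(kj, kj), mode="nearest")
        highs.append(low - smooth)            # high-frequency subband, level j
        low = smooth                          # iterate on the lowpass band
    return low, highs
```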

The affine systems of the shearlet transform for 2D images are defined as follows:

\(I_{A S}(\varphi)=\left\{\varphi_{j, l, k}=|\operatorname{det} A|^{j / 2} \varphi\left(S^{l} A^{j} x-k\right) ; j, l \in Z, k \in Z^{2}\right\}\)       (3)

where \(\varphi \in L^{2}\left(R^{2}\right)\); j, l, and k are the scale, direction, and spatial position parameters, respectively. For \(j \geq 0,-2^{j} \leq l \leq 2^{j}-1, k \in Z^{2}, d=0,1\), the shearlet transform of a function f is calculated as:

\(\left\langle f, \varphi_{j, l, k}^{(d)}\right\rangle=2^{3 j / 2} \int_{R^{2}} \hat{f}(\varepsilon) \overline{V\left(2^{-2 j} \varepsilon\right) W_{j, l}^{(d)}(\varepsilon)} e^{-2 \pi i \varepsilon A_{d}^{-j} S_{d}^{-l} k} d \varepsilon\)       (4)

where \(\hat{\varphi}_{j, l, k}^{(d)}\) is the Fourier transform of the shearlet, \(V\left(2^{-2 j} \varepsilon\right)\) is the scale function, and \(W_{j, l}^{(d)}(\varepsilon)\) is the trapezoidal window function. A is the scaling matrix for multiscale decomposition and S is the shear matrix for directional partitioning, with |det S| = 1. In this paper, m = 3; the numbers of shearing directions at the three levels are \(2^{2}\), \(2^{2}\), and \(2^{3}\), respectively; and the sizes of the corresponding shear filters are 32, 32, and 16, respectively.

For each a > 0 and s ∈ R, A and S are given as follows:

\(A=\left(\begin{array}{cc} a & 0 \\ 0 & \sqrt{a} \end{array}\right), \quad S=\left(\begin{array}{ll} 1 & s \\ 0 & 1 \end{array}\right)\)       (5)

where a=4, s=1.
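Substituting a = 4 and s = 1 into Eq. (5) gives the concrete matrices used in this paper:

\(A=\left(\begin{array}{ll} 4 & 0 \\ 0 & 2 \end{array}\right), \quad S=\left(\begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right), \quad|\operatorname{det} A|=8, \quad|\operatorname{det} S|=1\)

Thus A scales anisotropically (by 4 along x and by \(\sqrt{4}=2\) along y), while S shears along the horizontal direction and, having unit determinant, preserves area, consistent with |det S| = 1 above.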

2.3 Gabor Feature Extraction

Owing to their good representation of texture features [21], the low-frequency subbands were represented by even-symmetrical Gabor filters. Firstly, a Gabor filter bank with eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135° and 157.5°) [22] was constructed, denoted G{m}. The expression of the even-symmetrical Gabor filter is:

\(G\{m\}(x, y)=\frac{\gamma}{2 \pi \sigma^{2}} \exp \left\{-\frac{1}{2}\left(\frac{x_{\theta_{m}}^{2}+\gamma^{2} y_{\theta_{m}}^{2}}{\sigma^{2}}\right)\right\} \cos \left(2 \pi f_{m} x_{\theta_{m}}\right)\)      (6)

where \(\left[\begin{array}{l} x_{\theta_{m}} \\ y_{\theta_{m}} \end{array}\right]=\left[\begin{array}{cc} \cos \theta_{m} & \sin \theta_{m} \\ -\sin \theta_{m} & \cos \theta_{m} \end{array}\right]\left[\begin{array}{l} x \\ y \end{array}\right]\), fm is the center frequency of the mth direction, θm is the angle of the mth direction, m = 1, 2, ..., 8, and γ and σ are the length-width (aspect) ratio and the scale of the envelope, respectively. Since the multisource images have different curve structures, different scale parameters were employed: σIR = 3 and σVI = 6.

Then, the multi-orientation magnitude features were extracted by convolution with the Gabor filters:

\(M\{m\}(x, y)=I(x, y) \otimes G\{m\}(x, y)\)       (7)

where I(x, y) is the IR or VI image and ⊗ is the convolution operator.
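As a concrete illustration, a numpy sketch of Eqs. (6)-(7) follows. The kernel size, aspect ratio γ, and center frequency f are assumed values, since the paper fixes only σIR = 3 and σVI = 6.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_bank(sigma, ksize=31, gamma=0.5, f=0.1):
    """Even-symmetrical Gabor kernels at the eight orientations of Eq. (6)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    kernels = []
    for theta in np.deg2rad(np.arange(8) * 22.5):     # 0, 22.5, ..., 157.5 deg
        x_t = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
        y_t = -x * np.sin(theta) + y * np.cos(theta)
        g = (gamma / (2 * np.pi * sigma ** 2)
             * np.exp(-(x_t ** 2 + gamma ** 2 * y_t ** 2) / (2 * sigma ** 2))
             * np.cos(2 * np.pi * f * x_t))           # even-symmetric carrier
        kernels.append(g)
    return kernels

def gabor_magnitudes(img, sigma):
    """Per-orientation feature maps M{m} of Eq. (7) via 2-D convolution."""
    return [fftconvolve(img, g, mode="same") for g in gabor_bank(sigma)]
```

Calling gabor_magnitudes(lf_ir, sigma=3) and gabor_magnitudes(lf_vi, sigma=6) then yields the inputs for the low-frequency fusion rule of Section 3.1.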

2.4 Improved Dual-channel Pulse Coupled Neural Network

With the development of artificial neural networks, the pulse coupled neural network (PCNN), a model based on the visual information system of the cat, has been shown to preserve detailed and edge information of source images and to achieve good fusion results [23][24]. However, a single-channel PCNN can only describe the details of a single source image, and its linking strength is identical in every part of the image. Therefore, an improved dual-channel PCNN model is proposed in this paper. The dual-channel PCNN model consists of three parts: the receptive field, the modulation field, and the pulse generator, as shown in Fig. 4.

Fig. 4. The dual-channel PCNN model based on modified spatial frequency

The receptive field part is given by:

\(F_{i j}^{1}(n)=S_{i j}^{1}(n)\)       (8)

\(F_{i j}^{2}(n)=S_{i j}^{2}(n)\)       (9)

\(L_{i j}(n)=\left\{\begin{array}{ll} 1 & \text { if } \sum_{k, l \in N(i, j)} Y_{k l}(n-1)>0 \\ 0 & \text { otherwise } \end{array}\right.\)       (10)

where \(S_{i j}^{1}\) and \(S_{i j}^{2}\) denote the pixel values of the source images, which serve as the external stimuli of the model; \(F_{i j}^{1}\) and \(F_{i j}^{2}\) are the two feeding inputs; and Lij and Yij represent the linking input and the external output of the neuron, respectively.

The modulation part is given by:

\(U_{i j}(n)=\max \left\{F_{i j}^{1}(n)\left(1+\beta_{i j}^{1} L_{i j}(n)\right), F_{i j}^{2}(n)\left(1+\beta_{i j}^{2} L_{i j}(n)\right)\right\}\)       (11)

where Uij is the internal activity of the neuron, and \(\beta_{i j}^{1}\) and \(\beta_{i j}^{2}\) denote the linking strengths, which describe the response strength of different feature areas in the source images. The pulse generator part is given by:

\(Y_{i j}(n)=\left\{\begin{array}{ll} 1 & \text { if } U_{i j}(n) \geq \theta_{i j}(n-1) \\ 0 & \text { otherwise } \end{array}\right.\)       (12)

\(\theta_{i j}(n)=\theta_{i j}(n-1)-\Delta+V_{\theta} Y_{i j}(n)\)       (13)

\(T_{i j}=\left\{\begin{array}{ll} n & \text { if } Y_{i j}(n)=1 \text { for the first time } \\ T_{i j}(n-1) & \text { otherwise } \end{array}\right.\)       (14)

where θij is the dynamic threshold function, Vθ denotes the threshold increment of a fired neuron, and Tij records the iteration at which neuron (i, j) first fires. In this paper, Vθ = 20 and ∆ = 0.01. To enhance the adaptivity of the dual-channel PCNN, the modified spatial frequency (MSF) is adopted to determine \(\beta_{i j}^{1}\) and \(\beta_{i j}^{2}\), owing to the detail-description ability of MSF [25].

\(\beta^{1,2}=\operatorname{MSF}\left(I_{X}\right)\)       (15)

\(M S F=\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N}(R F(i, j)+C F(i, j)+M D F(i, j)+A D F(i, j))\)       (16)

where IX represents the VI or IR image, and RF, CF, MDF, and ADF represent the spatial frequencies in the horizontal, vertical, main-diagonal, and auxiliary-diagonal directions, respectively.
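The following sketch implements Eqs. (8)-(16) in Python. The initial threshold value, the 3x3 linking neighbourhood, the iteration cap, and the local window that turns Eq. (16) into a per-pixel linking strength are all assumptions not fixed by the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def modified_spatial_frequency(img, win=3):
    """Per-pixel MSF in the spirit of Eq. (16): squared first differences in
    the horizontal, vertical, main- and auxiliary-diagonal directions,
    averaged over a local window (Eq. (16) itself averages over the image)."""
    rf = (img - np.roll(img, 1, axis=1)) ** 2
    cf = (img - np.roll(img, 1, axis=0)) ** 2
    mdf = (img - np.roll(np.roll(img, 1, axis=0), 1, axis=1)) ** 2
    adf = (img - np.roll(np.roll(img, 1, axis=0), -1, axis=1)) ** 2
    box = np.full((win, win), 1.0 / win ** 2)
    return np.sqrt(convolve(rf + cf + mdf + adf, box, mode="nearest"))

def dual_channel_pcnn(s1, s2, beta1, beta2, v_theta=20.0, delta=0.01,
                      theta0=1.0, max_iter=2000):
    """Dual-channel PCNN of Eqs. (8)-(14). Returns a boolean map that is True
    where channel 1 supplied the larger internal activity U when the neuron
    first fired, which is what the fusion rule of Eq. (19) selects on.
    Stimuli s1, s2 are assumed nonnegative (e.g. absolute coefficients)."""
    theta = np.full(s1.shape, theta0)
    Y = np.zeros(s1.shape)
    fired = np.zeros(s1.shape, dtype=bool)
    winner1 = np.zeros(s1.shape, dtype=bool)
    link = np.ones((3, 3)); link[1, 1] = 0            # neighbourhood N(i, j)
    for _ in range(max_iter):
        L = (convolve(Y, link, mode="constant") > 0).astype(float)  # Eq. (10)
        U1 = s1 * (1 + beta1 * L)                                   # Eq. (11),
        U2 = s2 * (1 + beta2 * L)                                   # per channel
        U = np.maximum(U1, U2)
        Y = (U >= theta).astype(float)                              # Eq. (12)
        theta = theta - delta + v_theta * Y                         # Eq. (13)
        new = (Y == 1) & ~fired                                     # Eq. (14)
        winner1[new] = U1[new] >= U2[new]
        fired |= new
        if fired.all():
            break
    return winner1
```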

3. The Proposed Method

The presented detection method aims to enhance the accuracy of shape and temperature detection; its steps can be summarized as follows:

1) NSST was adopted to obtain multi-scale and multi-direction decomposition of multisource images.

2) To effectively fuse the low-frequency and high-frequency subbands, different fusion rules were devised: the maximum of the Gabor energy maps was used as the fusion rule for the low-frequency subbands, and the improved dual-channel PCNN model was used to fuse the high-frequency subbands.

3) The fused images were reconstructed by inverse NSST.

4) Based on the fusion results, the shape was extracted by the automatic threshold algorithm and refined by morphological operations, and the temperature was then extracted from the segmentation result.

3.1 The Fusion Rule of Low-frequency Subbands

The low-frequency subbands describe the main energy features. To effectively represent the fine-scale texture features in the low-frequency subbands, the fusion steps are as follows:

Step 1: The Gabor energy maps were computed from the Gabor magnitude features.

\(E(x, y)=\max _{m}\left(|M\{m\}(x, y)|^{2}\right)\)       (17)

where E(·) is the Gabor energy map and M{m}(·) is the magnitude feature at the mth orientation.

Step 2: The maximum-selection fusion rule was applied to the multi-orientation Gabor energy features.

\(L F_{F}=\left\{\begin{array}{ll} L F_{I R}, & E_{L}^{\mathrm{IR}}(x, y) \geq E_{L}^{V I}(x, y) \\ L F_{V I}, & E_{L}^{I R}(x, y)<E_{L}^{V I}(x, y) \end{array}\right.\)       (18)

where \(E_{L}^{I R}\) and \(E_{L}^{V I}\) are the Gabor energy maps of the infrared and visible low-frequency subbands, respectively.
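Using the gabor_magnitudes sketch from Section 2.3, Eqs. (17)-(18) reduce to a few lines of numpy; lf_ir and lf_vi denote the two low-frequency subbands and are illustrative names.

```python
import numpy as np

def fuse_lowpass(lf_ir, lf_vi, mags_ir, mags_vi):
    """Eqs. (17)-(18): per-pixel Gabor energy maps over the eight
    orientations, then maximum-energy selection of the coefficients."""
    e_ir = np.max(np.stack([np.abs(m) ** 2 for m in mags_ir]), axis=0)  # Eq. (17)
    e_vi = np.max(np.stack([np.abs(m) ** 2 for m in mags_vi]), axis=0)
    return np.where(e_ir >= e_vi, lf_ir, lf_vi)                         # Eq. (18)
```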

3.2 The Fusion Rule of High-frequency Subbands

The high-frequency subbands represent the coarse-scale texture and edge features. To better describe these features, an improved dual-channel PCNN based on MSF was employed. The fusion steps for the high-frequency subbands are as follows:

Step 1: The parameters of the improved model were initialized, and the MSF of each pixel was computed and used as the linking strength, as shown in Eqs. (15)-(16).

Step 2: Eqs. (8)-(14) were computed iteratively until all neurons had been activated.

Step 3: The maximum internal activity was used to select the fused high-frequency subbands:

\(H F_{F}=\left\{\begin{array}{ll} H F_{I R}^{l, k}, & U_{i j}(n)=U_{i j}^{I R}(n) \\ H F_{V I}^{l, k}, & U_{i j}(n)=U_{i j}^{V I}(n) \end{array}\right.\)       (19)

where \(U_{i j}^{I R}(n)=F_{i j}^{I R}(n)\left(1+\beta_{i j}^{I R} L_{i j}(n)\right), U_{i j}^{V I}(n)=F_{i j}^{V I}(n)\left(1+\beta_{i j}^{V I} L_{i j}(n)\right)\), and l and k represent the direction and spatial position parameters, respectively.
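With the dual_channel_pcnn and modified_spatial_frequency sketches from Section 2.4, Eq. (19) becomes a per-coefficient selection. Feeding the subbands through abs() as the nonnegative stimuli, and computing the linking strengths from the registered source images per Eq. (15), are assumptions of this sketch.

```python
import numpy as np

# For each directional high-frequency subband pair (hf_ir, hf_vi):
beta_ir = modified_spatial_frequency(ir_img)    # Eq. (15): linking strengths
beta_vi = modified_spatial_frequency(vi_img)
take_ir = dual_channel_pcnn(np.abs(hf_ir), np.abs(hf_vi), beta_ir, beta_vi)
hf_fused = np.where(take_ir, hf_ir, hf_vi)      # Eq. (19): select the winner
```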

4. Experimental Results and Discussion

To evaluate the performance of the presented fusion algorithm, a homemade database of 24 image pairs was built with a FLIR C2 camera, as shown in Fig. 5, and several experiments were designed around it. In this paper, seven objective metrics are employed to assess the fusion results: average gradient (AG) [26], standard deviation (SD) [27], spatial frequency (SF) [28], information entropy (IE) [29], average pixel intensity (API) [13], structural similarity (SSIM) [30], and accuracy (Acc) [31].
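For reference, minimal sketches of three of the reference-free metrics (IE, SD, AG) are given below; these are common textbook definitions that may differ in detail from the cited implementations, and 8-bit grey levels are assumed.

```python
import numpy as np

def information_entropy(img, bins=256):
    """IE: Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def standard_deviation(img):
    """SD: spread of intensities around the mean (a contrast proxy)."""
    return float(np.std(img))

def average_gradient(img):
    """AG: mean local gradient magnitude (a sharpness proxy)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```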

Fig. 5. The homemade database (a) VI images (b) IR images

4.1 Performance of Presented Fusion Method

To evaluate the presented fusion algorithm, 24 multisource image pairs covering variable conditions were considered; four of them, together with the images fused by the presented model, are shown in Fig. 6(a)-(c). The fused pig-body images clearly resemble the VI images. Furthermore, the visual quality of the VI, IR, and fused images was assessed by computing AG, API, IE, SD, and SF, as listed in Table 1. In addition, the comparative results for IE and SD are shown in Fig. 6(d)-(e), which show that the fused images achieve higher IE and SD. This reflects that the presented fusion algorithm markedly enhances the richness and contrast of the fused images.

Fig. 6. The comparative results (a) VI image (b) IR image (c) Fused image (d) comparison of IE (e) comparison of SD

Table 1. Performance of the presented fusion algorithm in variable situations

4.2 Fusion Results of Presented Algorithm

The performance of the presented multisource image fusion algorithm was assessed in several experiments. To test the fusion algorithm, four multisource image pairs under variable illumination were considered, as shown in Fig. 7(a)-(b). Furthermore, the presented fusion algorithm was compared with seven existing algorithms: DWT [32][33], DCT [34][35], CT [36], NSCT [37], NSST [38][39], NSCT-PCNN [40], and NSST-PCNN [41]. The main differences between the presented algorithm and the compared algorithms are summarized in Table 2. The fused images under the various conditions are shown in Fig. 7(c)-(j). The presented algorithm shows clear advantages in visual effect and clearly delineates the pig-body regions. It is also superior to the other seven image fusion algorithms in the objective assessments, as shown in Table 3. To compare the performance of the various multisource fusion algorithms, the objective assessment results are presented as scatter plots in Fig. 8. These likewise show that, owing to the superior representation power of the Gabor features and the IPCNN, the presented algorithm achieves a better visual effect as well as better objective assessment under variable conditions.

Fig. 7. The fusion results of various algorithms in view of variable situations (a) VI image (b) IR image (c) DWT (d) DCT (e) CT (f) NSCT (g) NSCT-PCNN (h) NSST (i) NSST-PCNN (j) Proposed

Table 2. The difference among fusion methods

Fig. 8. The comparison of various algorithms with objective assessments in view of variable situations (a)-(b) clear environment (c)-(d) dim environment

Table 3. The comparable results of various fusion algorithms in objective assessment

To evaluate the presented fusion algorithm on the whole homemade database, the comparative results are plotted as line charts in Fig. 9, where the x-axis denotes the index of the multisource image pair. The plots show that the fused image quality of the presented method is the best among the eight fusion algorithms. Furthermore, the running times of the fusion algorithms are listed in Table 4: compared with NSCT-PCNN, NSST-PCNN, and NSCT, the presented fusion algorithm has lower computational complexity.

Fig. 9. The comparison of various algorithms with objective assessments

Table 4. Computation times of various fusion methods

4.3 Pig-body Detection Based on Presented Fusion Method

To verify the performance of pig-body detection based on the presented fusion method, the 24 pig-body image pairs were processed with the 8 fusion algorithms mentioned above to obtain 192 fused images. Then, the pig-body areas were extracted by the automatic threshold algorithm, and morphological processing was applied to remove holes and noise from the segmentation results. To evaluate detection under variable conditions, multisource images under variable illumination were segmented, as shown in Fig. 10(a)-(b) and Fig. 11(a)-(b); the binarizations of the fused images produced by the various fusion algorithms are shown in Fig. 10(c)-(j) and Fig. 11(c)-(j). The presented segmentation algorithm achieved higher accuracy than all of the compared fusion algorithms, as shown in Table 5: its average accuracy was 98.389%, which is 2.102-4.066% higher than the other considered algorithms. The detection results are also plotted in Fig. 12, which shows that the presented algorithm achieved the highest detection rate among the 8 algorithms.
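A minimal sketch of this segmentation stage follows; Otsu's method as the automatic threshold, the disk structuring-element radii, and the minimum object size are assumptions, since the paper does not name its exact choices.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import (binary_closing, binary_opening, disk,
                                remove_small_objects)

def segment_pig_body(fused, min_size=500):
    """Binarize the fused image with an automatic threshold, then clean the
    mask morphologically to remove noise and fill holes."""
    mask = fused > threshold_otsu(fused)
    mask = binary_opening(mask, disk(3))    # suppress isolated noise
    mask = binary_closing(mask, disk(5))    # close small holes in the body
    return remove_small_objects(mask, min_size=min_size)
```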

Fig. 10. The binarization results in view of clear environment (a) IR image (b) VI image (c) DWT (d) DCT (e) CT (f) NSCT (g) NSCT-PCNN (h) NSST (i) NSST-PCNN (j) Proposed

Fig. 11. The binarization results in view of dim environment (a) IR image (b) VI image (c) DWT (d) DCT (e) CT (f) NSCT (g) NSCT-PCNN (h) NSST (i) NSST-PCNN (j) Proposed

Table 5. The comparative accuracy of the presented detection method

Fig. 12. The comparative results of detection accuracy

4.4 The Detection Results of Pig-body Temperature

To remove the influence of the surrounding temperature on pig-body temperature detection, the highest temperature was obtained from the shape segmentation together with the pixels of the infrared image. Firstly, the shape segmentation results were normalized and combined with the infrared image. Then, the highest temperature was found by pixel-by-pixel comparison. Finally, the locations of the highest temperature under the various conditions were extracted with FLIR Tools, as shown in Fig. 13. To assess the health of the pigs, the highest temperatures are plotted in Fig. 14, which shows that they all lie within the normal temperature range.
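The pixel-by-pixel comparison described above amounts to masking the radiometric IR image and taking its maximum, as in the short sketch below; a calibrated per-pixel temperature array (e.g. exported from FLIR Tools) is an assumption.

```python
import numpy as np

def highest_temperature(ir_temp, mask):
    """Return the highest pig-body temperature and its pixel location,
    restricting the IR temperature array to the segmented body mask."""
    body = np.where(mask, ir_temp.astype(float), -np.inf)
    idx = np.unravel_index(np.argmax(body), body.shape)
    return body[idx], idx
```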

Fig. 13. The detection and position of highest temperature in variable environments

Fig. 14. The highest temperature of pig-body

5. Conclusion and Future Works

A new multisource image fusion method, named NSST-GF-IPCNN, was presented for pig-body shape and temperature detection. First, NSST was employed to decompose the multisource images into a series of multiscale and multidirectional subbands. Then, an even-symmetrical Gabor filter and an IPCNN based on MSF were employed to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients were reconstructed into the final fusion image. Then, the binarizations were extracted using the automatic threshold algorithm and morphological processing, and the highest temperature was obtained from the binary images. The experiments show that the presented fusion algorithm raises the average segmentation rate to 98.389% on pig-body images under variable conditions, which is 2.102-4.066% higher than the other considered algorithms. This work mainly focused on pig-body shape and temperature detection under variable conditions. Some parameters of the presented method were not selected automatically; future studies should concentrate on the adaptivity of the fusion method.

Acknowledgement

The authors would like to thank their colleagues for their support of this work. The detailed comments from the anonymous reviewers are gratefully acknowledged. This work was supported by the Key Research and Development Project of Shandong Province (Grant No. 2019GNC106091) and the National Key Research and Development Program (Grant No. 2016YFD0200600-2016YFD0200602).

References

  1. M. A. Kashiha, C. Bahr, S. Ott, C. Moons, T. A. Niewold and F. Tuyttens, "Automatic monitoring of pig activity using image analysis," Advanced Concepts for Intelligent Vision Systems. Springer International Publishing, 2013.
  2. D. Stajnko, M. Brus and M. Hočevar, "Estimation of bull live weight through thermographically measured body dimensions," Computers and Electronics in Agriculture, vol. 61, no. 2, pp. 233-240, 2008. https://doi.org/10.1016/j.compag.2007.12.002
  3. X. Bai, F. Zhou and B. Xue, "Fusion of infrared and visual images through region extraction by using mult-scale center-surround top-hat transform," Optics Express, vol. 19, no. 9, pp. 8444-8457, 2011. https://doi.org/10.1364/OE.19.008444
  4. Weiwei Kong, Longjun Zhang, Yang Lei, "Novel fusion method for visible light and infrared images based on nsst-sf-pcnn," Infrared Physics & Technology, vol. 65, no. 7, pp. 103-112, 2014. https://doi.org/10.1016/j.infrared.2014.04.003
  5. F. R. Caldara, L. S. Dos Santos, S. T. Machado, M. Moi, I. de Alencar Nääs and L. Foppa, "Piglets' surface temperature change at different weights at birth," Asian-Australasian Journal of Animal Sciences, vol. 27, no. 3, pp. 431-438, 2014. https://doi.org/10.5713/ajas.2013.13505
  6. M. Alsaaod, C. Syring, J. Dietrich, M. G. Doherr, T. Gujan and A. Steiner, "A field trial of infrared thermography as a non-invasive diagnostic tool for early detection of digital dermatitis in dairy cows," The Veterinary Journal, vol. 199, no. 2, pp. 281-285, 2014. https://doi.org/10.1016/j.tvjl.2013.11.028
  7. K. Kawasue, K. D. Win, K. Yoshida and T. Tokunaga, "Black cattle body shape and temperature measurement using thermography and kinect sensor," Artificial Life and Robotics, vol. 22, pp. 464-470, 2017. https://doi.org/10.1007/s10015-017-0373-2
  8. C. Siewert, D. Hoeltig and C. Brauer, "Medical infrared imaging of the porcine thorax for diagnosis of lung pathologies," in Proc. of the 21st Int. Pig Veterinary Society Congress, Vol. II, Vancouver, p. 663, 2010.
  9. W. Ye and H. Xin, "Thermographical quantification of physiological and behavioral responses of group-housed young pigs," Transactions of the ASAE, vol. 43, no. 6, pp. 1843-1851, 2000. https://doi.org/10.13031/2013.3089
  10. T. S. Kammersgaard, J. Malmkvist, L. J. Pedersen, "Infrared thermography-a non-invasive tool to evaluate thermal status of neonatal pigs based on surface temperature," Animal, vol. 7, no. 12, pp. 2026-2034, 2013. https://doi.org/10.1017/s1751731113001778
  11. G. Bhatnagar, Q. M. J. Wu and Z. Liu, "A new contrast based multimodal medical image fusion framework," Neurocomputing, vol. 157, pp. 143-152, 2015. https://doi.org/10.1016/j.neucom.2015.01.025
  12. E. Vakaimalar, K. Mala and B. R. Suresh, "Multifocus image fusion scheme based on discrete cosine transform and spatial frequency," Multimedia Tools and Applications, 78, 17573-17587, 2019. https://doi.org/10.1007/s11042-018-7124-9
  13. X. Jin, Q. Jiang, S. Yao, D. Zhou, R. Nie and S. J. Lee, "Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain," Infrared Physics & Technology, vol. 88, pp. 1-12, 2018. https://doi.org/10.1016/j.infrared.2017.10.004
  14. L. I. He, L. Lei, Y. Chao and H. Wei, "An improved fusion algorithm for infrared and visible images based on multi-scale transform," Semiconductor Optoelectronics, vol. 74, pp. 28-37, 2016.
  15. L. Wang, B. Li and L. F. Tian, "Eggdd: an explicit dependency model for multi-modal medical image fusion in shift-invariant shearlet transform domain," Information Fusion, vol. 19, no. 11, pp. 29-37, 2014. https://doi.org/10.1016/j.inffus.2013.04.005
  16. Q. Zhang and B. L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Processing, vol. 89, no. 7, pp. 1334-1346, 2009. https://doi.org/10.1016/j.sigpro.2009.01.012
  17. W. Kong, "Technique for gray-scale visual light and infrared image fusion based on non-subsampled shearlet transform," Infrared Physics & Technology, vol. 63, no. 11, pp. 110-118, 2014. https://doi.org/10.1016/j.infrared.2013.12.016
  18. P. Kovesi, "Model fitting and robust estimation source code for MATLAB".
  19. W. G. Wan, Y. Yang, H. J. Lee, "Practical remote sensing image fusion method based on guided filter and improved SML in the NSST domain," Signal, Image and Video Processing, vol. 12, no. 5, pp. 959-966, 2018. https://doi.org/10.1007/s11760-018-1240-x
  20. X. Jin, G. Chen, J. Hou, "Multimodal sensor medical image fusion based on nonsubsampled shearlet transform and S-PCNNs in HSV space," Signal Processing, vol. 153, pp. 379-395, 2018. https://doi.org/10.1016/j.sigpro.2018.08.002
  21. J. Yang and J. Yang, "Multi-Channel Gabor Filter Design for Finger-Vein Image Enhancement," in Proc. of the fifth Inter. Conf. on Image and Graphics, pp. 87-91, 2009.
  22. J. Yang, Y. Shi and J. Yang, "Personal identification based on finger-vein features," Computers in Human Behavior, vol. 27, no. 5, pp. 1565-1570, 2011. https://doi.org/10.1016/j.chb.2010.10.029
  23. X. Xu, D. Shan, G. Wang and X. Jiang, "Multimodal medical image fusion using pcnn optimized by the qpso algorithm," Applied Soft Computing, vol. 46, pp. 588-595, 2016. https://doi.org/10.1016/j.asoc.2016.03.028
  24. L. Tang, J. Qian, L. Li, J. Hu and X. Wu, "Multimodal medical image fusion based on discrete tchebichef moments and pulse coupled neural network," International Journal of Imaging Systems & Technology, vol. 27, no. 1, pp. 57-65, 2017. https://doi.org/10.1002/ima.22210
  25. T. Xiang, L. Yan and R. Gao, "A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking pcnn in nsct domain," Infrared Physics & Technology, vol. 69, pp. 53-61, 2015. https://doi.org/10.1016/j.infrared.2015.01.002
  26. Bai and Xiangzhi, "Infrared and visual image fusion through feature extraction by morphological sequential toggle operator," Infrared Physics & Technology, vol. 71, pp. 77-86, 2015. https://doi.org/10.1016/j.infrared.2015.03.001
  27. J. Ma, C. Chen, C. Li and J. Huang, "Infrared and visible image fusion via gradient transfer and total variation minimization," Information Fusion, vol. 31, pp. 100-109, 2016. https://doi.org/10.1016/j.inffus.2016.02.001
  28. W. Kong, Y. Lei, M. Ren, "Fusion method for infrared and visible images based on improved quantum theory model," Neurocomputing, vol. 212, pp. 12-21, 2016. https://doi.org/10.1016/j.neucom.2016.01.120
  29. Y. Ma, J. Chen, C. Chen, F. Fan and J. Ma, "Infrared and visible image fusion using total variation model," Neurocomputing, vol. 202, pp. 12-19, 2016. https://doi.org/10.1016/j.neucom.2016.03.009
  30. K. Ma, K. Zeng and Z. Wang, "Perceptual quality assessment for multi-exposure image fusion," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345-3356, 2015. https://doi.org/10.1109/TIP.2015.2442920
  31. S. M. Nemalidinne and D. Gupta, "Nonsubsampled contourlet domain visible and infrared image fusion framework for fire detection using pulse coupled neural network and spatial fuzzy clustering," Fire Safety Journal, vol. 101, pp. 84-101, 2018. https://doi.org/10.1016/j.firesaf.2018.08.012
  32. T. Pu and G. Ni, "Contrast-based image fusion using the discrete wavelet transform," Optical Engineering, vol. 39, no. 8, pp. 2075-2082, 2000. https://doi.org/10.1117/1.1303728
  33. S. Balakrishnan, M. Cacciola, L. Udpa, B. P. Rao, T. Jayakumar and B. Raj, "Development of image fusion methodology using discrete wavelet transform for eddy current images," Ndt & E International, vol. 51, no. 10, pp. 51-57, 2012. https://doi.org/10.1016/j.ndteint.2012.06.006
  34. C. Liu, L. Jin, H. Tao, G. Li, Z. Zhuang and Zhang, Y, "Multi-focus image fusion based on spatial frequency in discrete cosine transform domain," IEEE Signal Processing Letters, vol. 22, no. 2, pp. 220-224, 2015. https://doi.org/10.1109/LSP.2014.2354534
  35. N. Paramanandham and K. Rajendiran, "Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications," Infrared Physics & Technology, vol. 88, pp. 13-22, 2018. https://doi.org/10.1016/j.infrared.2017.11.006
  36. F. V. Moghadam and H. R. Shahdoosti, "A new multifocus image fusion method using contourlet transform," 2017.
  37. Y. Chen and N. Sang, "Attention-based hierarchical fusion of visible and infrared images," Optik - International Journal for Light and Electron Optics, vol. 126, no. 23, pp. 4243-4248, 2015. https://doi.org/10.1016/j.ijleo.2015.08.120
  38. Z. Huang, M. Ding and X. Zhang, "Medical image fusion based on non-subsampled shearlet transform and spiking cortical model," Journal of Medical Imaging & Health Informatics, vol. 7, no. 1, pp. 229-234, 2017. https://doi.org/10.1166/jmihi.2017.2011
  39. Y. Huang, D. Bi and D. Wu, "Infrared and visible image fusion based on different constraints in the non-subsampled shearlet transform domain," Sensors, vol. 18, no. 4, pp. 1169, 2018. https://doi.org/10.3390/s18041169
  40. G. Yang, C. Ikuta, S. Zhang, Y. Uwate, Y. Nishio and Z. Lu, "A novel image fusion algorithm using an nsct and a pcnn with digital filtering," International Journal of Image & Data Fusion, vol. 9, pp. 82-94, 2018. https://doi.org/10.1080/19479832.2017.1384763
  41. B. Cheng, L. Jin and G. Li, "A novel fusion framework of visible light and infrared images based on singular value decomposition and adaptive dual-pcnn in nsst domain," Infrared Physics & Technology, vol. 91, pp. 153-163, 2018. https://doi.org/10.1016/j.infrared.2018.04.004