
Improved 3D Resolution Analysis of N-Ocular Imaging Systems with the Defocusing Effect of an Imaging Lens

  • Lee, Min-Chul (Department of Computer Science and Electronics, Kyushu Institute of Technology) ;
  • Inoue, Kotaro (Department of Computer Science and Electronics, Kyushu Institute of Technology) ;
  • Cho, Myungjin (Department of Electrical, Electronic, and Control Engineering, IITC, Hankyong National University)
  • Received : 2015.11.04
  • Accepted : 2015.11.28
  • Published : 2015.12.31

Abstract

In this paper, we propose an improved framework to analyze an N-ocular imaging system under fixed resource constraints such as the number of image sensors, the pixel size of the image sensors, the distance between adjacent image sensors, the focal length of the image sensors, and the field of view of the image sensors. The proposed framework takes into consideration, for the first time, the defocusing effect of the imaging lenses according to the object distance. Based on this framework, N-ocular imaging systems such as integral imaging are analyzed in terms of depth resolution using two-point-source resolution analysis. By taking into consideration the defocusing effect of the imaging lenses using a ray projection model, it is shown that an improved depth resolution can be obtained near the central depth plane as the number of cameras increases. To validate the proposed framework, Monte Carlo simulations are carried out and the results are analyzed.


I. INTRODUCTION

Three-dimensional (3D) N-ocular imaging systems are considered a promising technology for capturing 3D information from a 3D scene [1-9]. They cover imaging systems with N sensors, from stereo imaging (N=2) to integral imaging (N>>2). In the well-known stereo imaging technique, two image sensors are used, whereas a typical integral imaging system uses many image sensors. Various types of 3D imaging systems have been analyzed using ray optics and diffraction optics [10-16]. Recently, a method to compare the performance of such systems under equally constrained resources was proposed, because 3D resolution depends on several factors such as the number of sensors, the pixel size, and the imaging optics [14,15]. Several constraints, including the number of cameras, the total parallax, the total number of pixels, and the pixel size, were considered in the calculation of 3D resolution, but the fact that the imaging lenses in front of the sensors produce a defocusing effect that depends on the object distance was ignored. In practice, this defocusing may prevent an accurate analysis of a real N-ocular imaging system.

In this paper, we propose an improved framework for evaluating the performance of N-ocular imaging systems that takes into consideration the defocusing effect of the imaging lens in each sensor. The analysis is based on two-point-source resolution criteria using a ray projection model from the image sensors. The defocusing effect according to the position of the point sources is introduced into the calculation of the depth resolution. To show the usefulness of the proposed framework, Monte Carlo simulations were carried out, and the resulting depth resolutions are presented here.

 

II. PROPOSED METHOD

A typical N-ocular imaging system is shown in Fig. 1. For a given N, the N sensors are distributed laterally at equal intervals. For an objective analysis, the system design is considered to satisfy equally constrained resources: in Fig. 1, the total parallax (D), the pixel size (c), and the total number of pixels (K) are fixed. In addition, we assume that the diameter of the imaging lens is identical to the diameter (width) of the sensor. Let the focal length and the diameter of the imaging lens be f and A, respectively. The number of cameras is varied from a stereo system with N=2 (2 cameras) to integral imaging with N>>2 (N cameras) under the N-ocular framework. When N=2 (stereo imaging), the conceptual design of the proposed framework is shown in Fig. 1(a), where each image sensor is composed of K/2 pixels. On the other hand, Fig. 1(b) shows an N-ocular imaging system with N cameras (known as integral imaging), each composed of K/N pixels, as summarized in the sketch below.
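The equal-resource constraint can be made concrete with a short sketch. The uniform lateral placement of the lens centers over the total parallax and the equality of lens diameter and sensor width follow our reading of Fig. 1 and the assumptions above; the class name and fields are illustrative only.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class NOcularConfig:
    """Equal-resource N-ocular design: D, c, K, and f are fixed; only N varies."""
    N: int      # number of cameras (2 = stereo, >>2 = integral imaging)
    K: int      # total number of pixels shared by all sensors
    c: float    # pixel size (mm)
    D: float    # total parallax (mm)
    f: float    # focal length of each imaging lens (mm)

    @property
    def pixels_per_sensor(self) -> int:
        return self.K // self.N

    @property
    def lens_diameter(self) -> float:
        # lens diameter equals the sensor width (assumption stated in Sec. II)
        return self.c * self.K / self.N

    @property
    def lens_positions(self) -> np.ndarray:
        # lens centers spread at equal intervals over the total parallax
        return np.linspace(-self.D / 2, self.D / 2, self.N)
```

Note how the per-camera resources (pixels per sensor, lens diameter) shrink as N grows while the totals stay fixed; this is what makes the comparison across N objective.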

Fig. 1. Frameworks for N-ocular imaging systems: (a) N=2; (b) N>>2.

In general, the imaging lens placed in front of the image sensor produces a defocusing effect that depends on the distance of the 3D object, as shown in Fig. 2. We assume that the gap between the image sensor and the imaging lens is set to $g$. The central depth plane (CDP) is obtained from the lens formula [17]:

$$\frac{1}{z_g} + \frac{1}{g} = \frac{1}{f} \qquad (1)$$

where $z_g$ is the object distance from the imaging lens; the CDP lies at the distance $z_{CDP} = fg/(g-f)$ that satisfies Eq. (1). We now consider a different position ($z_2$) of the point source away from the CDP, as shown in Fig. 2(b). From the geometry of the optical relationships and the lens formula, the defocus diameter $d$ is given by [17]:
$$d = A_N\, g \left|\frac{1}{z_{CDP}} - \frac{1}{z_{CDP} + z_2}\right| \qquad (2)$$
where $A_N$ is the diameter of the lens in an N-ocular imaging system and $z_2$ is the signed distance of the object from the CDP. A numeric illustration of Eqs. (1) and (2) is given below.
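As a quick numeric check of Eqs. (1) and (2), the following sketch uses the gap g = 50.2 mm and the roughly 12,000 mm CDP reported in Section III; the focal length and lens diameter are not preserved in the surviving text, so f = 50 mm and A_N = 1 mm are assumed values that merely reproduce a CDP of the right order.

```python
# Numeric check of Eqs. (1) and (2). f and A_N are assumed values;
# g = 50.2 mm comes from Section III.
g = 50.2     # gap between sensor and imaging lens (mm)
f = 50.0     # focal length (mm); assumed
A_N = 1.0    # lens diameter (mm); assumed

z_cdp = f * g / (g - f)   # Eq. (1) solved for the object distance (~12,550 mm here)
for z2 in (-500.0, 0.0, 500.0):                       # offsets from the CDP (mm)
    d = A_N * g * abs(1 / z_cdp - 1 / (z_cdp + z2))   # defocus diameter, Eq. (2)
    print(f"z2 = {z2:+7.1f} mm -> d = {d:.6f} mm")
```

As expected, d vanishes at the CDP (z2 = 0) and grows as the point source moves away from it.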

Fig. 2. Ray geometry of the imaging lens. (a) Point source located at the central depth plane (CDP). (b) Point source away from the CDP.

For the N-ocular imaging system shown in Fig. 1, we calculate the depth resolution using the proposed analysis method. To do so, we utilize two-point-source resolution criteria with spatial ray back-projection from the image sensors to the reconstruction plane. In our analysis, the defocus diameter of the imaging lens given by Eq. (2) is newly added to the analysis process previously described in [14].

The procedure of the proposed analysis based on the resolution of two point sources is shown in Fig. 3. First, we explain in detail the calculation of the depth resolution using two point sources; the basic principle is shown in Fig. 4(a).

Fig. 3. Calculation procedure of 3D resolution using two-point-source resolution analysis.

Fig. 4. (a) Two-point-source resolution model for depth resolution. (b) Ray projection for unresolvable depth ranges to calculate depth resolution.

We define the depth resolution as the minimum distance that separates two closely spaced point sources. The two point sources are separated when their point spread functions (PSFs) are registered at two different sensor pixels in at least one of the N image sensors. As shown in Fig. 4(a), the two point sources are assumed to be located along the z axis. We assume that the first point source is located at $(x_1, z_1)$. The PSF of this point source is recorded by each image sensor; note that the recorded beams are pixelated due to the discrete nature of the image sensor. The position of the recorded pixel in the ith sensor for the PSF of the first point source is represented by
$$s_{1i} = \left\lceil \frac{f\,(x_1 - P_i)}{c\,z_1} \right\rfloor \qquad (3)$$
where $c$ is the pixel size of the sensor, $f$ is the focal length of the imaging lens, $P_i$ is the position of the ith imaging lens, and $\lceil\cdot\rfloor$ denotes rounding to the nearest integer.

Next, we consider the second point source, as shown in Fig. 4(a), and examine whether its PSF can be separated from the pixel that registered the first PSF. In this paper, we take into account the defocusing effect at the positions of the two point sources as given by Eq. (2). In this case, when the center of the second PSF falls within the $s_{1i}$th pixel, the unresolvable pixel area is given by
$$u \in \left[\, s_{1i}\,c - \frac{c + \delta + d}{2},\;\; s_{1i}\,c + \frac{c + \delta + d}{2} \,\right] \qquad (4)$$
Here, $\delta$ is the size of the main lobe of the PSF, which is $1.22\lambda f/A_N$. Fig. 5 shows the variation of the unresolvable pixel area according to the defocusing effect at the positions of the two point sources. When the first point source is located near the CDP, the unresolvable pixel area is calculated as shown in Fig. 5(a); when it is away from the CDP, it is calculated as shown in Fig. 5(b). Next, we back-project all the calculated unresolvable pixel areas into space to calculate the depth resolution, as shown in Fig. 4(b). When the ith unresolvable pixel area is back-projected onto the z axis through its corresponding imaging lens, the projected range, which we call the unresolvable depth range, lies in
$$z \in \left[\, z_i^{\min},\ z_i^{\max} \,\right] \qquad (5)$$
where
$$z_i^{\min} = \frac{f\,(x_1 - P_i)}{s_{1i}\,c + \frac{c + \delta + d}{2}}, \qquad z_i^{\max} = \frac{f\,(x_1 - P_i)}{s_{1i}\,c - \frac{c + \delta + d}{2}} \qquad (6)$$
Fig. 5. Calculation of the unresolvable pixel area (a) near the CDP, (b) away from the CDP.

The unresolvable depth ranges associated with all N cameras are calculated for a given point $x_1$. Since two point sources are resolved as soon as at least one image sensor can distinguish them, the overall unresolvable range is the common intersection of all N unresolvable depth ranges (a code sketch of Eqs. (3)–(7) follows below). The depth resolution then becomes
$$\Delta z = \min_{i}\, z_i^{\max} \;-\; \max_{i}\, z_i^{\min} \qquad (7)$$
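The complete per-trial computation of Eqs. (3)–(7) can be summarized as follows. This is a minimal sketch under the reconstructed equations above, not the authors' code; the handling of a ray through a lens center (an unbounded back-projection) is our own choice.

```python
import numpy as np

def depth_resolution(x1, z1, N, K, c, f, g, D, wavelength=550e-6):
    """Unresolvable depth-range length (mm) for a point source at (x1, z1).

    All lengths are in mm; the 550 nm wavelength is an assumed value.
    """
    A_N = c * K / N                        # lens diameter = sensor width
    P = np.linspace(-D / 2, D / 2, N)      # lens centers over the total parallax
    z_cdp = f * g / (g - f)                # central depth plane, Eq. (1)
    d = A_N * g * abs(1 / z_cdp - 1 / z1)  # defocus diameter, Eq. (2)
    delta = 1.22 * wavelength * f / A_N    # PSF main-lobe size
    w = c + delta + d                      # unresolvable width on the sensor

    lo, hi = -np.inf, np.inf
    for Pi in P:
        s = np.rint(f * (x1 - Pi) / (c * z1))      # recorded pixel index, Eq. (3)
        u_lo, u_hi = s * c - w / 2, s * c + w / 2  # unresolvable pixel area, Eq. (4)
        if u_lo <= 0.0 <= u_hi:
            continue                       # ray through the lens center: no depth bound
        z_a, z_b = sorted((f * (x1 - Pi) / u_hi, f * (x1 - Pi) / u_lo))
        lo, hi = max(lo, z_a), min(hi, z_b)        # intersect ranges, Eqs. (5)-(6)
    return max(hi - lo, 0.0)               # common intersection, Eq. (7); may be inf
```

The intersection shrinks as cameras are added, which is the mechanism behind the improved depth resolution for large N reported in Section III.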

III. EXPERIMENTS AND RESULTS

In order to statistically compute the depth resolution of Eqs. (5)–(7), we used Monte Carlo simulations. The experimental parameters are shown in Table 1. We first set the gap distance between the sensor and the imaging lens to 50.2 mm, which corresponds to a 12,000 mm CDP. The first point source is placed near the CDP, and the position of the second point source is then moved randomly in the longitudinal direction to calculate the depth resolution. Under equally constrained resources, the simulation is repeated for 4,000 trials with different random positions of the point sources, where z (the range) runs from 11,000 mm to 13,000 mm and x varies from −100 mm to 100 mm. All calculated depth resolutions are then averaged, as sketched below.
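A sketch of the Monte Carlo procedure, reusing the depth_resolution function above. Drawing the first source position uniformly over the stated x and z ranges is one plausible reading of the text, and all parameter values other than g = 50.2 mm are placeholders, since the body of Table 1 did not survive extraction.

```python
# Monte Carlo averaging of the depth resolution (Sec. III).
import numpy as np

rng = np.random.default_rng(0)
samples = []
for _ in range(4000):                    # 4,000 trials, as in the text
    x1 = rng.uniform(-100.0, 100.0)      # lateral position (mm)
    z1 = rng.uniform(11000.0, 13000.0)   # longitudinal position (mm)
    dz = depth_resolution(x1, z1, N=16, K=1600, c=0.01,
                          f=50.0, g=50.2, D=100.0)  # placeholder parameters
    if np.isfinite(dz):                  # drop unbounded (unconstrained) trials
        samples.append(dz)

print(f"average depth resolution: {np.mean(samples):.1f} mm")
```

Repeating this loop for different N, or for different pixel sizes c, reproduces the kinds of sweeps plotted in Figs. 6–8.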

Table 1. Experimental parameters for Monte Carlo simulations

Fig. 6 shows the simulation results for depth resolution at different distances of the first point source according to the number of cameras. As the number of cameras increases, the depth resolution value decreases, i.e., the depth resolution improves. Fig. 7 shows the results according to the distance of the first point source.

Fig. 6. Depth resolution according to the number of cameras.

Fig. 7. Depth resolution according to the object distance and the number of cameras.

Depth resolution is calculated by averaging the common intersection of all unresolvable depth ranges produced by the N cameras; therefore, for large N, adding further cameras produces little variation, as shown in Fig. 7. Fig. 7 also shows that the best (smallest) depth resolution value was obtained at z=12,000 mm, because the CDP is located at 12,000 mm in this experiment. As the first point source moves further from the CDP, the depth resolution worsens. We also investigated the behavior of the depth resolution by changing the pixel size of the image sensors. Fig. 8 presents the results of this analysis when the pixel size was varied: the depth resolution improved as the pixel size decreased.

Fig. 8. Calculation results of depth resolution for various pixel sizes.


IV. CONCLUSIONS

To conclude, we have presented an improved framework for analyzing N-ocular imaging systems under fixed resource constraints. The proposed analysis includes the defocusing effect of the imaging lenses when calculating depth resolution. We have investigated the system performance in terms of depth resolution as a function of sensing parameters such as the number of cameras, the object distance, the pixel size, and so on. Experimental results reveal that the depth resolution can be improved when the number of sensors is large and the object is located near the CDP. We expect that this improved analysis will be useful for designing practical N-ocular imaging systems.

References

  1. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” Journal de Physique Théorique et Appliquée, vol. 7, no. 1, pp. 821-825, 1908.
  2. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” Journal of the Optical Society of America, vol. 58, no. 1, pp. 71-76, 1968. https://doi.org/10.1364/JOSA.58.000071
  3. L. Yang, M. McCormick, and N. Davies, “Discussion of the optics of a new 3-D imaging system,” Applied Optics, vol. 27, no. 21, pp. 4529-4534, 1988. https://doi.org/10.1364/AO.27.004529
  4. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proceedings of the IEEE, vol. 94, no. 3, pp. 591-607, 2006. https://doi.org/10.1109/JPROC.2006.870696
  5. F. Okano, J. Arai, K. Mitani, and M. Okui, “Real-time integral imaging based on extremely high resolution video system,” Proceedings of the IEEE, vol. 94, no. 3, pp. 490-501, 2006. https://doi.org/10.1109/JPROC.2006.870687
  6. D. H. Shin and H. Yoo, “Scale-variant magnification for computational integral imaging and its application to 3D object correlator,” Optics Express, vol. 16, no. 12, pp. 8855-8867, 2008. https://doi.org/10.1364/OE.16.008855
  7. R. Martinez-Cuenca, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Progress in 3-D multiperspective display by integral imaging,” Proceedings of the IEEE, vol. 97, no. 6, pp. 1067-1077, 2009. https://doi.org/10.1109/JPROC.2009.2016816
  8. J. H. Park, G. Baasantseren, N. Kim, G. Park, J. M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Optics Express, vol. 16, no. 12, pp. 8800-8813, 2008. https://doi.org/10.1364/OE.16.008800
  9. M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proceedings of the IEEE, vol. 99, no. 4, pp. 556-575, 2011. https://doi.org/10.1109/JPROC.2010.2090114
  10. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” Journal of the Optical Society of America, vol. 58, no. 1, pp. 71-76, 1968. https://doi.org/10.1364/JOSA.58.000071
  11. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” Journal of the Optical Society of America A, vol. 15, no. 8, pp. 2059-2065, 1998. https://doi.org/10.1364/JOSAA.15.002059
  12. F. Jin, J. S. Jang, and B. Javidi, “Effects of device resolution on three dimensional integral imaging,” Optics Letters, vol. 29, no. 12, pp. 1345-1347, 2004. https://doi.org/10.1364/OL.29.001345
  13. Z. Kavehvash, K. Mehrany, and S. Bagheri, “Optimization of the lens array structure for performance improvement of integral imaging,” Optics Letters, vol. 36, no. 20, pp. 3993-3995, 2011. https://doi.org/10.1364/OL.36.003993
  14. D. Shin, M. Daneshpanah, and B. Javidi, “Generalization of three-dimensional N-ocular imaging systems under fixed resource constraints,” Optics Letters, vol. 37, no. 1, pp. 19-21, 2012. https://doi.org/10.1364/OL.37.000019
  15. D. Shin and B. Javidi, “Resolution analysis of N-ocular imaging systems with tilted image sensors,” Journal of Display Technology, vol. 8, no. 9, pp. 529-533, 2012. https://doi.org/10.1109/JDT.2012.2202090
  16. M. Cho and B. Javidi, “Optimization of 3D integral imaging system parameters,” Journal of Display Technology, vol. 8, no. 6, pp. 357-360, 2012. https://doi.org/10.1109/JDT.2012.2189551
  17. S. Pertuz, D. Puig, and M. A. Garcia, “Analysis of focus measure operators for shape-from-focus,” Pattern Recognition, vol. 46, no. 5, pp. 1415-1432, 2013. https://doi.org/10.1016/j.patcog.2012.11.011