Absolute Depth Estimation Based on a Sharpness-assessment Algorithm for a Camera with an Asymmetric Aperture

  • Kim, Beomjun (School of Electronic and Electrical Engineering, Kyungpook National University) ;
  • Heo, Daerak (School of Electronic and Electrical Engineering, Kyungpook National University) ;
  • Moon, Woonchan (School of Electronic and Electrical Engineering, Kyungpook National University) ;
  • Hahn, Joonku (School of Electronic and Electrical Engineering, Kyungpook National University)
  • Received : 2021.06.14
  • Accepted : 2021.07.29
  • Published : 2021.10.25

Abstract

Methods for absolute depth estimation have received considerable interest, and most algorithms focus on minimizing the difference between an input defocused image and an estimated defocused image. Such approaches can be computationally complex, because the defocused image must be recalculated from an estimate of the focused image. In this paper, we present a new method to recover the depth of a scene based on a sharpness-assessment algorithm. The proposed algorithm estimates the depth of the scene by calculating the sharpness of images deconvolved with a specific point-spread function (PSF). While most depth-estimation studies evaluate the depth of the scene only behind the focal plane, the proposed method evaluates a broad depth range both nearer and farther than the focal plane. This is accomplished using an asymmetric aperture, so that the PSF at a position nearer than the focal plane differs from that at a position farther than the focal plane. From an image taken with the focal plane at 160 cm, the depth of objects over the broad range from 60 to 350 cm is estimated with 10 cm resolution. With an asymmetric aperture, we demonstrate the feasibility of the sharpness-assessment algorithm for recovering the absolute depth of a scene from a single defocused image.
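
As a rough illustration of the deconvolve-then-assess idea described in the abstract (not the authors' implementation), the Python sketch below scores a set of candidate depths by deconvolving the defocused image with the PSF associated with each depth and selecting the depth that yields the sharpest restoration. The calibrated asymmetric-aperture PSF table (`psfs_by_depth`), the Richardson-Lucy deconvolver, and the gradient-magnitude sharpness score are stand-in assumptions, not details taken from the paper.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def sharpness(img):
    """Mean gradient-magnitude score; one common sharpness-assessment choice."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.mean(np.hypot(gx, gy))

def estimate_depth(defocused, psfs_by_depth, n_iter=30):
    """Pick the candidate depth whose PSF gives the sharpest deconvolved image.

    defocused     : 2-D grayscale image, float values in [0, 1].
    psfs_by_depth : dict mapping candidate depth (cm) -> 2-D PSF array,
                    e.g. asymmetric-aperture PSFs calibrated at 10 cm steps.
    """
    scores = {}
    for depth, psf in psfs_by_depth.items():
        psf = psf / psf.sum()                       # normalize PSF energy
        restored = richardson_lucy(defocused, psf, n_iter)
        scores[depth] = sharpness(restored)
    best_depth = max(scores, key=scores.get)        # sharpest restoration wins
    return best_depth, scores
```

In practice this scoring would be applied per image patch to obtain a depth map, and the candidate set would span the range quoted in the abstract, e.g. 60 cm to 350 cm in 10 cm steps around a 160 cm focal plane.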

Acknowledgement

This research was supported by the 'Cross-Ministry Giga Korea Project' grant funded by the Korea government (MSIT) (No. 1711116979, Development of Telecommunications Terminal with Digital Holographic Table-top Display).
