A review of missing video frame estimation techniques for their suitability analysis in NPP

  • Chaubey, Mrityunjay (Computer Science, Centre for Interdisciplinary Mathematical Sciences, Institute of Science, Banaras Hindu University, Varanasi);
  • Singh, Lalit Kumar (Department of Computer Science & Engineering, IIT (BHU));
  • Gupta, Manjari (Computer Science, Centre for Interdisciplinary Mathematical Sciences, Institute of Science, Banaras Hindu University, Varanasi)
  • Received : 2021.06.13
  • Accepted : 2021.10.10
  • Published : 2022.04.25

Abstract

Video processing techniques contribute to the safety of nuclear power plants (NPPs), for example by tracking staff online in video in order to estimate the radiation dose received during work in the plant. Nuclear reactors are also monitored remotely by visual means, with video processing techniques used to evaluate the plant's condition. Internal reactor components must be inspected frequently; current practice, however, relies on human technicians who review inspection videos and identify cracks on the metallic surfaces of underwater components, a process that is costly, time-consuming and subjective. If any frame of an inspection video is degraded, corrupted or missing because of noise or any other factor, a serious safety issue may arise. Estimating missing, degraded or corrupted video frames remains a challenging problem to date. In this paper, a systematic literature review of video processing techniques is carried out in order to analyse their suitability for NPP applications. The limitations of the existing approaches are identified, together with a roadmap for overcoming them.
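To make the problem concrete, the sketch below illustrates one of the simplest families of techniques covered by such reviews: motion-compensated frame interpolation, in which a missing frame is estimated from its two surviving neighbours. It is a minimal illustration using OpenCV's Farneback dense optical flow, not the method of any specific paper cited here; the function name `estimate_missing_frame` and all parameter values are illustrative choices.

```python
import cv2
import numpy as np

def estimate_missing_frame(prev_frame, next_frame):
    """Estimate a dropped middle frame by motion-compensated interpolation.

    Dense optical flow is computed from the previous to the next surviving
    frame; each neighbour is then warped half-way along that flow toward
    the missing time instant and the two predictions are averaged.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Farneback dense flow: per-pixel displacement from prev to next.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))

    # Sample the previous frame half a step back along the flow ...
    warped_prev = cv2.remap(prev_frame,
                            grid_x - 0.5 * flow[..., 0],
                            grid_y - 0.5 * flow[..., 1],
                            cv2.INTER_LINEAR)
    # ... and the next frame half a step forward along the same flow.
    warped_next = cv2.remap(next_frame,
                            grid_x + 0.5 * flow[..., 0],
                            grid_y + 0.5 * flow[..., 1],
                            cv2.INTER_LINEAR)

    # Blend the two motion-compensated predictions of the missing frame.
    return cv2.addWeighted(warped_prev, 0.5, warped_next, 0.5, 0.0)
```

A practical recovery pipeline for inspection video, of the kind reviewed in this work, would additionally handle occlusions, check forward-backward flow consistency, or replace the hand-crafted warp with learned interpolation models.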
