Learning the Covariance Dynamics of a Large-Scale Environment for Informative Path Planning of Unmanned Aerial Vehicle Sensors

  • Park, Soo-Ho (Jump Trading) ;
  • Choi, Han-Lim (Division of Aerospace Engineering, KAIST) ;
  • Roy, Nicholas (Department of Aeronautics and Astronautics, Massachusetts Institute of Technology) ;
  • How, Jonathan P. (Department of Aeronautics and Astronautics, Massachusetts Institute of Technology)
  • Published: 2010.12.15


This work addresses trajectory planning for unmanned aerial vehicle sensors that take measurements of a large, nonlinear system in order to improve estimation and prediction of the system state. Understanding the global system state typically requires probabilistic state estimation; the goal is therefore to find trajectories such that the measurements taken along each trajectory minimize the expected error of the predicted state of the system. However, the strong nonlinearity of the system dynamics means that computationally costly Monte Carlo estimation techniques are needed to update the state distribution over time. This computational burden renders planning infeasible, since the search process must compute the covariance of the posterior state estimate for every candidate path. To resolve this challenge, this work replaces the expensive numerical prediction process with an approximate model of the covariance dynamics learned through nonlinear time-series regression. Using an autoregressive time-series model fitted with a regularized least-squares algorithm yields accurate and efficient parametric models. When used for trajectory optimization, the learned covariance dynamics are shown to outperform other approximation strategies, such as linearization and partial ensemble propagation, in both accuracy and speed, on examples of simplified weather forecasting.
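The core idea of the abstract can be illustrated with a minimal sketch: fitting an autoregressive model of a scalar quantity (e.g., the trace of a forecast covariance) by ridge-regularized least squares, then using it for cheap one-step prediction in place of a full numerical propagation. The function names, AR order, and regularization weight below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_ar_ridge(series, order=3, lam=1e-6):
    """Fit an order-p autoregressive model x_t ~ w . [x_{t-1},...,x_{t-p}] + b
    by ridge-regularized least squares: min ||Aw - y||^2 + lam ||w||^2."""
    p = order
    # Each row holds the p most recent values, newest first.
    X = np.array([series[t - p:t][::-1] for t in range(p, len(series))])
    y = np.asarray(series[p:])
    A = np.hstack([X, np.ones((len(X), 1))])         # append a bias column
    I = np.eye(A.shape[1])
    I[-1, -1] = 0.0                                  # leave the bias unregularized
    # Normal equations of the regularized problem: (A'A + lam I) w = A'y.
    return np.linalg.solve(A.T @ A + lam * I, A.T @ y)

def predict_next(w, recent):
    """One-step prediction from the p most recent values (newest first)."""
    return float(np.dot(w[:-1], recent) + w[-1])
```

Once fitted offline, `predict_next` costs a single dot product per step, which is what lets a planner evaluate many candidate trajectories instead of rerunning a Monte Carlo (ensemble) covariance update for each one.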


