References
- D.C. Dennett, The Intentional Stance. Cambridge, MA: MIT Press, 1987.
- G. Gergely, Z. Nádasdy, G. Csibra, and S. Bíró, Taking the Intentional Stance at 12 Months of Age, Cognition, Vol. 56, pp. 165-193, 1995. https://doi.org/10.1016/0010-0277(95)00661-H
- Y. Shi and R. Crawfis, Optimal Cover Placement against Static Enemy Positions, in Proc. of the 8th International Conference on Foundations of Digital Games (FDG), pp. 109-116, 2013.
- J. Tremblay, P.A. Torres, N. Rikovitch, and C. Verbrugge, An Exploration Tool for Predicting Stealthy Behaviour, in Proc. of the AIIDE Workshop on Artificial Intelligence in the Game Design Process, 2013.
- J. Tremblay, P.A. Torres, and C. Verbrugge, Measuring Risk in Stealth Games, in Proc. of the 9th International Conference on Foundations of Digital Games (FDG), 2014.
- Q. Xu, J. Tremblay, and C. Verbrugge, Generative Methods for Guard and Camera Placement in Stealth Games, in Proc. of the Tenth Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2014.
- Q. Xu, J. Tremblay, and C. Verbrugge, Procedural Guard Placement for Stealth Games, in Proc. of the 5th Workshop on Procedural Content Generation (PCG), 2014.
- J. Tremblay, P.A. Torres, and C. Verbrugge, An Algorithmic Approach to Analyzing Combat and Stealth Games, in Proc. of the IEEE Conference on Computational Intelligence and Games (CIG), 2014.
- B.Q. Huang, G.Y. Cao, and M. Guo, Reinforcement Learning Neural Network to the Problem of Autonomous Mobile Robot Obstacle Avoidance, in Proc. of the 2005 International Conference on Machine Learning and Cybernetics (ICMLC), 2005.
- M. Humphrys, Action Selection Methods Using Reinforcement Learning, PhD Thesis, University of Cambridge, 1997.
- J. Togelius, S. Karakovskiy, J. Koutník, and J. Schmidhuber, Super Mario Evolution, in Proc. of the IEEE Symposium on Computational Intelligence and Games (CIG), pp. 156-161, 2009.
- Z. Buk, J. Koutník, and M. Šnorek, NEAT in HyperNEAT Substituted with Genetic Programming, in Proc. of the International Conference on Adaptive and Natural Computing Algorithms (ICANNGA), 2009.
- V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, Playing Atari with Deep Reinforcement Learning, in the Neural Information Processing Systems (NIPS) Deep Learning Workshop, 2013.
- M.G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling, The Arcade Learning Environment: An Evaluation Platform for General Agents, Journal of Artificial Intelligence Research (JAIR), Vol. 47, pp. 253-279, 2013. https://doi.org/10.1613/jair.3912
- M.J.L. Boada, R. Barber, and M.A. Salichs, Visual Approach Skill for a Mobile Robot Using Learning and Fusion of Simple Skills, Robotics and Autonomous Systems, Vol. 38, pp. 157-170, 2002. https://doi.org/10.1016/S0921-8890(02)00165-3
- C.J.C.H. Watkins and P. Dayan, Q-Learning, Machine Learning, Vol. 8, pp. 279-292, 1992.
- A. Onat, Q-Learning with Recurrent Neural Networks as a Controller for the Inverted Pendulum Problem, in Proc. of the Fifth International Conference on Neural Information Processing (ICONIP), pp. 837-840, 1998.
- L.J. Lin, Reinforcement Learning for Robots Using Neural Networks, PhD Thesis, School of Computer Science, Carnegie Mellon University, 1993.
- E. Cervera and A.P. del Pobil, Sensor-Based Learning for Practical Planning of Fine Motions in Robotics, Information Sciences, Vol. 145, pp. 147-168, 2002. https://doi.org/10.1016/S0020-0255(02)00228-1
- G.A. Rummery and M. Niranjan, On-line Q-Learning Using Connectionist Systems, Technical Report No. 166, Cambridge University Engineering Department, 1994.