Dynamic programming with ARMA, Markov, and NARMA models vs. Q-learning: a case study
Jarosław Piotr Chrobak, A. Pacut, Andrzej Karbowski
Abstract: Two approaches to control policy synthesis for unknown systems are investigated. The indirect approach is based on the identification of ARMA, NARMA, or Markov chain models and the application of dynamic programming to these models, with or without the use of a certainty equivalence principle. The direct approach is represented by Q-learning, either with a lookup table or with radial basis function approximation. We applied both methods to the optimization of a stock portfolio and tested them on Warsaw Stock Exchange data.
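The direct approach mentioned in the abstract, Q-learning with a lookup table, can be illustrated with a minimal sketch. The toy "chain" environment below is a hypothetical stand-in for a sequential decision problem, not the paper's portfolio experiment; all names and parameter values are illustrative assumptions.

```python
import random

def q_learning(n_states, n_actions, step, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular (lookup-table) Q-learning with epsilon-greedy exploration.

    Q[s][a] is updated toward the target r + gamma * max_a' Q[s'][a'].
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability eps, otherwise act greedily on Q.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy chain: states 0..4; action 1 ("hold") moves right toward a reward
# of 1 at the end, action 0 ("quit") terminates with a small penalty.
def step(s, a):
    if a == 0:
        return s, -0.01, True
    if s + 1 == 4:
        return 4, 1.0, True
    return s + 1, 0.0, False

Q = q_learning(5, 2, step)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

After training, the greedy policy moves right in every non-terminal state, and Q for the rewarded action in the last decision state approaches 1. The lookup table works here because the state space is tiny; for continuous stock-price data this table would be replaced by a function approximator such as the radial basis function networks the abstract mentions.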
|Book||IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, vol. 3, 2000|
|Keywords in English||ARMA model, autoregressive moving average processes, computer aided software engineering, control system synthesis, dynamic programming, function approximation, hidden Markov models, investments, learning (artificial intelligence), lookup table, Markov chain models, NARMA models, optimization methods, portfolio, probability, Q-learning, radial basis function networks, share prices, stock markets, testing|
|Citation count*||2 (2016-05-16)|
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.