Dynamic programming with ARMA, Markov, and NARMA models vs. Q-learning: a case study

Jarosław Piotr Chrobak, A. Pacut, Andrzej Karbowski

Abstract

Two approaches to control policy synthesis for unknown systems are investigated. The indirect approach is based on the identification of ARMA, NARMA, or Markov chain models, and on the application of dynamic programming to these models, with or without the use of a certainty equivalence principle. The direct approach is represented by Q-learning, either with a lookup table or with radial basis function approximation. We applied both methods to the optimization of a stock portfolio and tested them on Warsaw Stock Exchange data.
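The direct approach named in the abstract can be illustrated with a minimal tabular Q-learning sketch. This is not the paper's portfolio implementation; the toy chain environment, reward scheme, and parameter values below are assumptions chosen only to show the standard one-step update Q(s,a) ← Q(s,a) + α(r + γ max_a' Q(s',a') − Q(s,a)).

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy behavior policy.

    `step(s, a)` is an environment callback returning (next_state,
    reward, done); the lookup table `q` plays the role of the
    action-value function.
    """
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection over the current table
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            s2, r, done = step(s, a)
            # one-step temporal-difference target
            target = r + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

# Toy chain environment (an assumption, not from the paper):
# states 0..4, action 1 moves right, action 0 stays put;
# reaching state 4 pays reward 1.0 and ends the episode.
def step(s, a):
    if a == 1:
        s += 1
    if s >= 4:
        return 4, 1.0, True
    return s, 0.0, False

random.seed(0)  # fixed seed so the sketch is reproducible
q = q_learning(5, 2, step)
```

After training, the greedy policy read off the table prefers "move right" in every state, since that is the only way to collect the terminal reward. The RBF variant mentioned in the abstract would replace the table `q` with a parametric approximator fitted to the same targets.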
Authors:
- Jarosław Piotr Chrobak (FEIT / AK), The Institute of Control and Computation Engineering
- A. Pacut
- Andrzej Karbowski (FEIT / AK), The Institute of Control and Computation Engineering
Pages: 265-270, vol. 3
Book: IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, vol. 3, 2000
Keywords in English: ARMA model, autoregressive moving average processes, computer aided software engineering, control system synthesis, dynamic programming, function approximation, hidden Markov models, investment, investments, learning (artificial intelligence), lookup table, Markov chain models, NARMA models, optimization, optimization methods, portfolio, probability, Q-learning, radial basis function networks, radial basis function neural networks, share prices, stock market, stock markets, table lookup, testing
DOI: 10.1109/IJCNN.2000.861314
Score (nominal): 0
Score source: journalList
Publication indicators: WoS Citations = 1
Citation count*: 2 (2016-05-16)
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.