On a feasible descent algorithm for solving min-max problems
Abstract: Using max functions as the example class, the paper presents a new algorithm for solving nondifferentiable optimization problems when the full subdifferential is available at each point. We generate an easy-to-handle representation of the improvement cone whose computational cost equals that of inverting a single matrix. This allows us to find a descent direction at each iteration, in contrast to the usual nondifferentiable methods, which use only one subgradient at each point and solve a QP direction-search subproblem at every step; the latter approach is considerably more expensive computationally and does not guarantee that the direction found is a descent direction. Results of numerical experiments on a sequential machine are presented; they show that the current algorithm performs well on convex min-max problems, while the nonconvex case requires careful elaboration of a new minimization algorithm. We also present an idea for a parallel method of bundle type that makes use of our mechanism for generating feasible directions.
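To make the problem setting concrete, the following is a minimal sketch of a convex min-max problem of the kind the abstract refers to. It is not the paper's feasible-descent algorithm (which builds a representation of the improvement cone via one matrix inversion); it only illustrates minimizing f(x) = max_i f_i(x) with a plain subgradient step, using the gradient of one active piece. All problem data are made up for demonstration.

```python
import numpy as np

def fs(x):
    # Two convex quadratic pieces; their pointwise max is minimized
    # at x = (0, 0) with optimal value 1.0.
    return np.array([(x[0] - 1.0) ** 2 + x[1] ** 2,
                     (x[0] + 1.0) ** 2 + x[1] ** 2])

def grads(x):
    # Gradients of the two pieces, stacked row-wise.
    return np.array([[2 * (x[0] - 1.0), 2 * x[1]],
                     [2 * (x[0] + 1.0), 2 * x[1]]])

def minmax_subgradient(x, steps=500):
    # Classical subgradient method: the gradient of any piece
    # attaining the max is a subgradient of f at x.
    for k in range(steps):
        i = int(np.argmax(fs(x)))        # index of an active piece
        g = grads(x)[i]                  # subgradient of f at x
        x = x - (1.0 / (k + 1)) * g      # diminishing step size
    return x

x = minmax_subgradient(np.array([3.0, 2.0]))
print(np.max(fs(x)))  # approaches the optimal value 1.0
```

Note that a single subgradient step of this kind need not be a descent step for f, which is exactly the drawback the abstract attributes to standard one-subgradient methods.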
Book: Vulkov Lubin, Wasniewski Jerzy, Yalamov Plamen (eds.): Numerical Analysis and Its Applications, Lecture Notes in Computer Science, no. 1196, Springer Berlin Heidelberg, 1997, ISBN 978-3-540-62598-8, 978-3-540-68326-1
Keywords in English: Algorithm Analysis and Problem Complexity, numerical analysis, Numeric Computing
Publication indicators: citation count* = 0
* The presented citation count is obtained through analysis of Internet information and is close to the number calculated by the Publish or Perish system.