CONVERGENCE OF A CLASS OF NONMONOTONE DESCENT METHODS FOR KURDYKA--ŁOJASIEWICZ OPTIMIZATION PROBLEMS*
Abstract
This note is concerned with a class of nonmonotone descent methods for minimizing a proper lower semicontinuous Kurdyka--Łojasiewicz (KL) function $\Phi$; these methods generate a sequence satisfying a nonmonotone decrease condition and a relative error tolerance. Under suitable assumptions, we prove that the whole sequence converges to a limiting critical point of $\Phi$, and, when $\Phi$ is a KL function of exponent $\theta \in [0, 1)$, the convergence admits a linear rate if $\theta \in [0, 1/2]$ and a sublinear rate associated with $\theta$ if $\theta \in (1/2, 1)$. These assumptions are shown to be necessary and sufficient if, in addition, $\Phi$ is weakly convex on a neighborhood of the set of critical points. Our results not only extend the convergence results for monotone descent methods applied to KL optimization problems but also resolve the convergence question for the iterate sequences generated by a class of nonmonotone line search algorithms for nonconvex and nonsmooth problems.
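For orientation, the following is a minimal sketch of the two kinds of conditions referred to above, in the standard form used for descent methods under the KL property; the constants $a, b > 0$, the memory length $m$, and the max-type nonmonotone reference value are assumptions of this sketch rather than the paper's exact formulation:
\begin{align*}
  \text{(nonmonotone decrease)}\quad
    & \Phi(x^{k+1}) + a\,\|x^{k+1}-x^{k}\|^{2}
      \;\le\; \max_{0\le j\le \min(k,\,m)} \Phi(x^{k-j}),\\
  \text{(relative error)}\quad
    & \operatorname{dist}\bigl(0,\,\partial\Phi(x^{k+1})\bigr)
      \;\le\; b\,\|x^{k+1}-x^{k}\|,
\end{align*}
where $\partial\Phi$ denotes the limiting subdifferential. Taking $m = 0$ recovers the usual monotone sufficient decrease inequality of classical descent frameworks.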
