Abstract
The Randomized Kaczmarz algorithm (RK) has recently drawn much attention because of its low computational complexity and modest memory requirements. Most existing analyses focus on the behavior of RK in Euclidean spaces and typically derive exponential convergence rates whose base tends to one as the condition number of the system increases. This dependence on the condition number largely limits the applicability of such estimates. Other results use relaxation (i.e., step-sizes that shrink to zero as the sample size grows) to obtain polynomial convergence rates for RK in Hilbert spaces. In this paper, we prove weak convergence, with polynomial rates, of RK with a constant step-size in Hilbert spaces. As a consequence, strong convergence of RK in Euclidean spaces follows with condition-number-free polynomial rates. In the noisy-data setting, we study the relaxation technique and establish strong convergence of RK in Hilbert spaces, with rates arbitrarily close to the minimax optimal ones. We apply the analysis to reproducing kernel-based online gradient descent learning algorithms and improve the state-of-the-art convergence estimates in the literature.
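For orientation, the sketch below shows the classical Randomized Kaczmarz iteration for a linear system Ax = b with an optional constant step-size eta; eta = 1 recovers the standard update. This is a minimal illustrative implementation (function name, row-sampling scheme, and fixed iteration count are assumptions), not the paper's exact formulation or analysis setting.

```python
import numpy as np

def randomized_kaczmarz(A, b, eta=1.0, n_iters=1000, rng=None):
    """Classical Randomized Kaczmarz for Ax = b with constant step-size eta.

    Rows are sampled with probability proportional to their squared norms;
    eta = 1 gives the standard RK projection step.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)          # randomly pick a row of the system
        residual = b[i] - A[i] @ x          # violation of the i-th equation
        x = x + eta * (residual / row_norms_sq[i]) * A[i]  # (relaxed) projection
    return x
```

A decaying schedule for eta (e.g., eta_k proportional to k^{-theta}) corresponds to the relaxation technique discussed above, whereas the paper's main results concern the constant step-size case.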