Abstract
We consider supervised learning in a reproducing kernel Hilbert space (RKHS) using random features. We show that the optimal rate of convergence is attained under suitable regularity conditions, while at the same time improving on existing bounds on the number of random features required. As a straightforward extension, we also consider distributed learning in the simple setting of one-shot communication and show that it achieves the same optimal rate.
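To make the setting concrete, here is a minimal sketch of learning with random features: ridge regression on random Fourier features that approximate a Gaussian kernel. The toy data, bandwidth `sigma`, feature count `D`, and regularizer `lam` are illustrative assumptions, not the paper's actual setup or rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative, not from the paper)
n, D = 200, 100           # n samples, D random features
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Random Fourier features approximating the Gaussian (RBF) kernel:
# z(x) = sqrt(2/D) * cos(W x + b), with W ~ N(0, 1/sigma^2), b ~ U[0, 2*pi]
sigma = 1.0
W = rng.standard_normal((1, D)) / sigma
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Ridge regression in the D-dimensional random-feature space:
# solve (Z^T Z + lam I) alpha = Z^T y
lam = 1e-2
alpha = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

# Predict at new points using the same feature map
X_test = np.array([[0.0], [1.5]])
Z_test = np.sqrt(2.0 / D) * np.cos(X_test @ W + b)
y_pred = Z_test @ alpha
```

The point of the random-feature approach is that the linear system above is D-by-D rather than n-by-n, so taking D much smaller than n (as the bounds in the paper allow) reduces the cost of exact kernel ridge regression.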