Abstract

The efficiency of deep convolutional neural networks (DCNNs) has been demonstrated empirically in many practical applications. In this paper, we establish a theory for approximating functions from Korobov spaces by DCNNs. This theory rigorously verifies the efficiency of DCNNs in approximating functions of many variables that possess certain variable structures, and their ability to overcome the curse of dimensionality.
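
As a point of reference for the function class named above, a commonly used definition of the Korobov space of mixed smoothness two is sketched below; the notation $X^{2,p}$, the zero-boundary condition, and the choice of norm follow one standard convention and may differ from the exact setting adopted in the paper.

% One standard convention for the Korobov space X^{2,p}([0,1]^d) (assumed here;
% the paper's precise norm and boundary conditions may differ).
\[
  X^{2,p}\!\left([0,1]^d\right)
  = \Bigl\{ f \in L^p\!\left([0,1]^d\right) :
      f\big|_{\partial [0,1]^d} = 0,\;
      D^{\alpha} f \in L^p\!\left([0,1]^d\right)
      \ \text{for all } \alpha \ \text{with}\ \|\alpha\|_{\infty} \le 2 \Bigr\},
\]
% equipped with the mixed-derivative norm
\[
  \|f\|_{X^{2,p}}
  = \max_{\|\alpha\|_{\infty} \le 2} \bigl\| D^{\alpha} f \bigr\|_{L^p\left([0,1]^d\right)},
  \qquad
  D^{\alpha} f
  = \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}},
\]
% where the D^{\alpha} f are weak (distributional) mixed partial derivatives.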