Abstract

This study examines the computation of the high-dimensional zero-norm penalized quantile regression estimator, defined as the global minimizer of the zero-norm penalized check loss function. To seek a desirable approximation to this estimator, we reformulate the NP-hard problem as an equivalent augmented Lipschitz optimization problem. We then exploit its coupled structure to propose a multistage convex relaxation approach (MSCRA_PPA), each step of which inexactly solves a weighted ℓ1-regularized check loss minimization problem via a proximal dual semismooth Newton method. Under a restricted strong convexity condition, we provide a theoretical guarantee for the MSCRA_PPA by establishing an error bound between each iterate and the true estimator, together with a linear convergence rate in a statistical sense. Numerical comparisons on synthetic and real data show that the MSCRA_PPA achieves comparable or better estimation performance while requiring much less CPU time.
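To make the multistage scheme concrete, below is a minimal, hypothetical sketch of the idea in Python. It is not the paper's MSCRA_PPA (no proximal dual semismooth Newton solver); instead, each stage solves the weighted ℓ1-regularized check loss problem exactly as a linear program via `scipy.optimize.linprog`, and the weights are then updated as 1/(|β_j| + ε) so that successive convex relaxations approximate the zero-norm penalty. The data, penalty parameter `lam`, and weight update rule are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_qr(X, y, tau, lam, w):
    """Weighted l1-penalized quantile regression solved as an LP.

    min_beta  sum_i rho_tau(y_i - x_i' beta) + lam * sum_j w_j |beta_j|,
    where rho_tau(r) = r * (tau - 1{r < 0}) is the check loss.
    Variables: [beta+ (p), beta- (p), u (n), v (n)], all >= 0, with
    residual y - X beta = u - v.
    """
    n, p = X.shape
    c = np.concatenate([lam * w, lam * w,
                        tau * np.ones(n), (1.0 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
    z = res.x
    return z[:p] - z[p:2 * p]

def multistage_qr(X, y, tau=0.5, lam=0.1, stages=3, eps=1e-3):
    """Multistage convex relaxation (illustrative reweighted-l1 variant).

    Stage 1 uses the plain l1 penalty (w = 1); later stages reweight each
    coordinate by 1/(|beta_j| + eps), driving small coefficients to zero.
    """
    p = X.shape[1]
    w = np.ones(p)
    for _ in range(stages):
        beta = weighted_l1_qr(X, y, tau, lam, w)
        w = 1.0 / (np.abs(beta) + eps)  # relaxation of the zero-norm
    return beta
```

On a sparse synthetic design, the reweighting stages typically recover the true support more cleanly than a single ℓ1 pass, mirroring the motivation for the multistage approach described above.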

Full text