Ai, M., Wang, F., Yu, J., & Zhang, H. (2020).
Optimal subsampling for large-scale quantile regression.
Journal of Complexity.
Abstract: To deal with massive data sets, subsampling is an effective method that can significantly reduce the computational cost of estimating model parameters. In this article, an efficient subsampling method is developed for large-scale quantile regression via a Poisson sampling framework, which solves the memory constraint imposed by big data. Under mild conditions, large-sample properties of the estimator, including weak and strong consistency and asymptotic normality, are established. Furthermore, the optimal subsampling probabilities are derived according to the A-optimality criterion. The estimator based on the optimal subsampling asymptotically achieves a smaller variance than the one based on uniform random subsampling. The proposed method is illustrated and evaluated through numerical analyses on both simulated and real data sets.
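A minimal sketch of the Poisson-subsampling idea for quantile regression, in Python. The pilot step, the score used for the subsampling probabilities (|residual| times the covariate norm, a simple surrogate rather than the paper's A-optimal formula), and the use of SciPy's Powell optimizer for the non-smooth check loss are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def check_loss(beta, X, y, w, tau):
    """Weighted quantile (check) loss: sum_i w_i * rho_tau(y_i - x_i' beta)."""
    r = y - X @ beta
    return np.sum(w * r * (tau - (r < 0)))

# Simulated stand-in for a massive data set.
n, d, tau = 100_000, 5, 0.5
X = rng.standard_normal((n, d))
y = X @ np.ones(d) + rng.standard_normal(n)

# Step 1: pilot estimate from a small uniform subsample.
pilot_idx = rng.choice(n, 1000, replace=False)
pilot = minimize(check_loss, np.zeros(d), method="Powell",
                 args=(X[pilot_idx], y[pilot_idx], np.ones(1000), tau)).x

# Step 2: subsampling probabilities (a surrogate for the A-optimal ones),
# scaled so the expected subsample size is about r.
r = 2000
score = np.abs(y - X @ pilot) * np.linalg.norm(X, axis=1)
pi = np.minimum(1.0, r * score / score.sum())

# Step 3: Poisson sampling -- each point enters independently with
# probability pi_i, so the probabilities never need to be held in memory
# all at once.
keep = rng.random(n) < pi

# Step 4: inverse-probability-weighted quantile regression on the subsample.
beta_hat = minimize(check_loss, pilot, method="Powell",
                    args=(X[keep], y[keep], 1.0 / pi[keep], tau)).x
print(beta_hat)
```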
Huang, H., Gao, Y., Zhang, H., & Li, B. (2020).
Weighted Lasso Estimates for Sparse Logistic Regressions: Non-asymptotic Properties with Measurement Error.
Acta Mathematica Scientia.
Abstract: When we are interested in a high-dimensional system and focus on classification performance, $\ell_{1}$-penalized logistic regression becomes important and popular. However, Lasso estimates can be problematic when the penalties on different coefficients are all the same and unrelated to the data. We propose two types of weighted Lasso estimates whose weights depend on the covariates via the McDiarmid inequality. Given sample size $n$ and covariate dimension $p$, the finite-sample behavior of the proposed methods with a diverging number of predictors is characterized by non-asymptotic oracle inequalities for the $\ell_{1}$-estimation error and the squared prediction error of the unknown parameters. We compare the performance of our methods with earlier weighted estimates on simulated data, and then apply these methods to real data analysis.
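A brief sketch of a covariate-dependent weighted Lasso for logistic regression. The paper's McDiarmid-based weights are not reproduced here; the column-scale weights below are a hypothetical stand-in, and the weighted penalty is implemented through the standard column-rescaling reparameterization on top of scikit-learn's $\ell_1$-penalized solver.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 500, 50
X = rng.standard_normal((n, p)) * rng.uniform(0.5, 3.0, p)  # columns on different scales
beta_true = np.zeros(p)
beta_true[:5] = 1.0  # sparse truth: only 5 active predictors
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(int)

# Covariate-dependent penalty weights (an assumed form, not the paper's):
# columns with a larger range get a larger penalty.
w = np.max(np.abs(X), axis=0) / np.sqrt(n)

# Solve min -loglik + lam * sum_j w_j |beta_j| by rescaling columns:
# with X_tilde_j = X_j / w_j and beta_tilde_j = w_j * beta_j, the weighted
# penalty becomes a plain l1 penalty on beta_tilde.
X_tilde = X / w
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0,
                         fit_intercept=False)
clf.fit(X_tilde, y)
beta_hat = clf.coef_.ravel() / w  # map back to the original parameterization
print(np.nonzero(beta_hat)[0])   # indices of selected predictors
```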
Yu, J., Wang, H., Ai, M., & Zhang, H. (2020).
Optimal Distributed Subsampling for Maximum Quasi-Likelihood Estimators with Massive Data.
Journal of the American Statistical Association.
Abstract: Nonuniform subsampling methods are effective for reducing the computational burden while maintaining estimation efficiency with massive data. Existing methods mostly focus on subsampling with replacement because of its high computational efficiency. However, if the data volume is so large that the nonuniform subsampling probabilities cannot be calculated all at once, subsampling with replacement becomes infeasible. This paper solves this problem using Poisson subsampling. We first derive optimal Poisson subsampling probabilities in the context of quasi-likelihood estimation under the A- and L-optimality criteria. For a practically implementable algorithm with approximated optimal subsampling probabilities, we establish the consistency and asymptotic normality of the resulting estimators. To handle the situation where the full data are stored in different blocks or at multiple locations, we develop a distributed subsampling framework in which statistics are computed simultaneously on smaller partitions of the full data. Asymptotic properties of the resulting aggregated estimator are investigated. We illustrate and evaluate the proposed strategies through numerical experiments on simulated and real data sets.
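A minimal sketch of the distributed Poisson-subsampling workflow for one quasi-likelihood example (logistic regression). The L-optimality-style score |y_i - p̂_i|·‖x_i‖, the in-memory block layout, and pooling the weighted subsamples into a single fit are simplifying assumptions for illustration; the paper aggregates block-level statistics rather than raw subsamples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, d, n_blocks, r = 200_000, 5, 4, 5000
X = rng.standard_normal((n, d))
y = (rng.random(n) < 1 / (1 + np.exp(-X @ (0.5 * np.ones(d))))).astype(int)
blocks = np.array_split(np.arange(n), n_blocks)  # stand-in for data stored in blocks

# Pilot fit on a small uniform sample (shared across blocks); C=1e6 makes
# the default l2 penalty effectively vanish.
pilot_idx = rng.choice(n, 2000, replace=False)
pilot = LogisticRegression(C=1e6, max_iter=1000).fit(X[pilot_idx], y[pilot_idx])

sub_X, sub_y, sub_w = [], [], []
for idx in blocks:
    # Each block computes its own scores and Poisson-samples locally, so
    # no single machine ever needs all n probabilities at once.
    p_hat = pilot.predict_proba(X[idx])[:, 1]
    score = np.abs(y[idx] - p_hat) * np.linalg.norm(X[idx], axis=1)
    pi = np.minimum(1.0, (r / n_blocks) * score / score.sum())
    keep = rng.random(len(idx)) < pi
    sub_X.append(X[idx][keep])
    sub_y.append(y[idx][keep])
    sub_w.append(1.0 / pi[keep])  # inverse-probability weights

# Aggregate: one weighted logistic regression on the pooled subsamples.
Xs, ys, ws = np.vstack(sub_X), np.concatenate(sub_y), np.concatenate(sub_w)
fit = LogisticRegression(C=1e6, max_iter=1000).fit(Xs, ys, sample_weight=ws)
print(fit.coef_.ravel())
```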