Spotlight
LASSO with Nonlinear Measurements is Equivalent to One With Linear Measurements
Christos Thrampoulidis · Ehsan Abbasi · Babak Hassibi
Consider estimating an unknown, but structured (e.g. sparse, low-rank, etc.), signal $x_0\in R^n$ from a vector $y\in R^m$ of measurements of the form $y_i=g_i(a_i^Tx_0)$, where the $a_i$'s are the rows of a known measurement matrix $A$, and $g$ is a (potentially unknown) nonlinear and random link-function. Such measurement functions could arise in applications where the measurement device has nonlinearities and uncertainties. They could also arise by design; e.g., $g_i(x)=\mathrm{sign}(x+z_i)$ corresponds to noisy 1-bit quantized measurements. Motivated by the classical work of Brillinger, and more recent work of Plan and Vershynin, we estimate $x_0$ by solving the Generalized LASSO, i.e., $\hat x=\arg\min_{x}\|y-Ax\|_2+\lambda f(x)$, for some regularization parameter $\lambda>0$ and some (typically non-smooth) convex regularizer $f$ that promotes the structure of $x_0$, e.g. the $\ell_1$-norm or the nuclear norm. While this approach seems to naively ignore the nonlinear function $g$, both Brillinger and Plan and Vershynin have shown that, when the entries of $A$ are iid standard normal, this is a good estimator of $x_0$ up to a constant of proportionality $\mu$, which only depends on $g$. In this work, we considerably strengthen these results by obtaining explicit expressions for $\|\hat x-\mu x_0\|_2$, for the regularized Generalized LASSO, that are asymptotically precise when $m$ and $n$ grow large. A main result is that the estimation performance of the Generalized LASSO with nonlinear measurements is asymptotically the same as one whose measurements are linear, $y_i=\mu a_i^Tx_0+\sigma z_i$, with $\mu=E[\gamma g(\gamma)]$ and $\sigma^2=E[(g(\gamma)-\mu\gamma)^2]$, where $\gamma$ is standard normal. The derived expressions on the estimation performance are the first known precise results in this context. One interesting consequence of our result is that the optimal quantizer of the measurements that minimizes the estimation error of the LASSO is the celebrated Lloyd-Max quantizer.
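The effective-noise parameters $\mu$ and $\sigma^2$ of the equivalent linear model are easy to evaluate for a given link-function. As a rough sketch (not from the paper; the Monte Carlo setup and the choice $g(x)=\mathrm{sign}(x)$, i.e. noiseless 1-bit measurements, are illustrative assumptions), one can estimate them numerically and check against the closed forms $\mu=\sqrt{2/\pi}$ and $\sigma^2=1-2/\pi$ that hold for the sign link:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = rng.standard_normal(1_000_000)  # samples of a standard normal

def g(x):
    # illustrative link-function: 1-bit (sign) measurements, noiseless case z_i = 0
    return np.sign(x)

# mu = E[gamma * g(gamma)]  -- the proportionality constant of the estimator
mu = np.mean(gamma * g(gamma))

# sigma^2 = E[(g(gamma) - mu*gamma)^2]  -- variance of the equivalent linear noise
sigma2 = np.mean((g(gamma) - mu * gamma) ** 2)

print(mu, sigma2)  # close to sqrt(2/pi) ~ 0.798 and 1 - 2/pi ~ 0.363
```

For the sign link these expectations reduce analytically ($\mu=E|\gamma|=\sqrt{2/\pi}$ and $\sigma^2=1-\mu^2$), which gives a quick sanity check on the Monte Carlo estimates.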
Author Information
Christos Thrampoulidis (Caltech)
Ehsan Abbasi (Caltech)
Babak Hassibi (Caltech)
More from the Same Authors

2020 Poster: Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems »
Sahin Lale · Kamyar Azizzadenesheli · Babak Hassibi · Anima Anandkumar 
2019 Poster: Universality in Learning from Linear Measurements »
Ehsan Abbasi · Fariborz Salehi · Babak Hassibi 
2019 Poster: The Impact of Regularization on High-dimensional Logistic Regression »
Fariborz Salehi · Ehsan Abbasi · Babak Hassibi 
2018 Poster: Learning without the Phase: Regularized PhaseMax Achieves Optimal Sample Complexity »
Fariborz Salehi · Ehsan Abbasi · Babak Hassibi 
2017 Poster: A Universal Analysis of Large-Scale Regularized Least Squares Solutions »
Ashkan Panahi · Babak Hassibi 
2017 Spotlight: A Universal Analysis of Large-Scale Regularized Least Squares Solutions »
Ashkan Panahi · Babak Hassibi 
2016 Poster: Fundamental Limits of Budget-Fidelity Tradeoff in Label Crowdsourcing »
Farshad Lahouti · Babak Hassibi 
2015 Poster: LASSO with Nonlinear Measurements is Equivalent to One With Linear Measurements »
Christos Thrampoulidis · Ehsan Abbasi · Babak Hassibi