Here is today's batch, enjoy!

Randomized algorithms for matrices and data by Michael Mahoney. The abstract reads:

Randomized algorithms for very large matrix problems—e.g., random sampling and random projection algorithms for least-squares and low-rank matrix approximation—have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis, largely since matrices are popular structures with which to model data drawn from a wide range of application domains. Although this work had its origins within theoretical computer science, where researchers were interested in proving worst-case bounds, i.e., bounds without any assumptions at all on the input data, researchers from numerical linear algebra, statistics, applied mathematics, data analysis, and machine learning, as well as domain scientists, have subsequently extended and applied these methods in important ways. Although this has been great for the development of the area and for the “technology transfer” of theoretical ideas into practical applications, this interdisciplinarity has thus far sometimes obscured the underlying simplicity and generality of the core ideas. This review will provide a detailed overview of recent work on randomized algorithms for matrix problems, with an emphasis on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications. Crucial in this context is the connection with the concept of statistical leverage. This concept has long been used in statistical regression diagnostics to identify outliers; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists. This connection arises naturally when one explicitly decouples the effect of randomization in these matrix algorithms from the underlying linear algebraic structure. This decoupling also permits much finer control in the application of randomization, as well as the easier exploitation of domain knowledge.
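The leverage-score sampling idea the abstract alludes to fits in a few lines of NumPy. The sketch below is a minimal, hypothetical illustration of sampled least-squares; the problem sizes, the sketch size `s`, and the with-replacement sampling scheme are my assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall least-squares problem A x ≈ b (dimensions are illustrative).
n, d = 2000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Statistical leverage scores: squared row norms of an orthonormal
# basis U for the column space of A (here obtained via the thin SVD).
U, _, _ = np.linalg.svd(A, full_matrices=False)
leverage = np.sum(U**2, axis=1)  # the scores sum to rank(A) = d

# Sample rows with probability proportional to leverage, rescale, and
# solve the much smaller least-squares problem on the sampled rows.
s = 200  # sketch size (assumption)
p = leverage / leverage.sum()
idx = rng.choice(n, size=s, p=p)
scale = 1.0 / np.sqrt(s * p[idx])
x_hat, *_ = np.linalg.lstsq(scale[:, None] * A[idx],
                            scale * b[idx], rcond=None)

print(np.linalg.norm(x_hat - x_true))  # close to the full solution
```

For a Gaussian matrix like this one the leverage scores are nearly uniform; the scheme pays off on matrices whose rows vary widely in importance, which is exactly the regime the review discusses.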

Verifiable conditions of ℓ1-recovery for sparse signals with sign restrictions by Anatoli Juditsky, Fatma Kılınç-Karzan, Arkadi Nemirovski. The abstract reads:

We propose necessary and sufficient conditions for a sensing matrix to be “s-semigood” – to allow for exact ℓ1-recovery of sparse signals with at most s nonzero entries under sign restrictions on part of the entries. We express error bounds for imperfect ℓ1-recovery in terms of the characteristics underlying these conditions. These characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse ℓ1-recovery and thus to efficiently computable upper bounds on those s for which a given sensing matrix is s-semigood. We examine the properties of the proposed verifiable sufficient conditions, describe their limits of performance, and provide numerical examples comparing them with other verifiable conditions from the literature.
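The ℓ1-recovery problem that such conditions certify is basis pursuit: minimize ‖x‖₁ subject to Ax = b. A minimal sketch, cast as a linear program over the positive and negative parts of x and solved with SciPy, follows; the matrix sizes, sparsity level, and random sensing matrix are my assumptions and have nothing to do with the paper's verifiable conditions themselves.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Random sensing matrix and an s-sparse signal (sizes are illustrative).
m, n, s = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
b = A @ x

# Basis pursuit: min ||x||_1  s.t.  A x = b,
# written as an LP over the split x = x_plus - x_minus, both >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x))  # recovery error, up to solver tolerance
```

With m well above the sparsity level, as here, the LP typically recovers x exactly; the paper's contribution is computable certificates for when such recovery is guaranteed.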

Sparse Non Gaussian Component Analysis by Semidefinite Programming by Anatoli Juditsky, Elmar Diederichs, Arkadi Nemirovski, Vladimir Spokoiny. The abstract reads:

Sparse non-Gaussian component analysis (SNGCA) is an unsupervised method for extracting a linear structure from high-dimensional data, based on estimating a low-dimensional non-Gaussian data component. In this paper we discuss a new approach to direct estimation of the projector on the target space based on semidefinite programming, which improves the method's sensitivity to a broad variety of deviations from normality. We also discuss procedures that allow the structure to be recovered when its effective dimension is unknown.

Nuit Blanche has already covered these papers but they seem to have been updated:

Credit photo: NASA/JPL/University of Arizona, Lots of Layers in Terby Crater, ESP_021942_1520
