Has Compressive Sensing peaked? That is certainly not the impression you get from reading today's contributions.
Dror let me know of his new paper: Fault Identification via Non-parametric Belief Propagation by Danny Bickson, Dror Baron, Alexander Ihler, Harel Avissar, Danny Dolev. The abstract reads:
We consider the problem of identifying a pattern of faults from a set of noisy linear measurements. Unfortunately, maximum a posteriori probability estimation of the fault pattern is computationally intractable. To solve the fault identification problem, we propose a non-parametric belief propagation approach. We show empirically that our belief propagation solver is more accurate than recent state-of-the-art algorithms including interior point methods and semidefinite programming. Our superior performance is explained by the fact that we take into account both the binary nature of the individual faults and the sparsity of the fault pattern arising from their rarity.
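For readers who want to play with the setup, here is a minimal Python sketch of the measurement model: a rare binary fault pattern observed through noisy linear measurements, recovered with a plain ISTA ℓ1 baseline. This is only a stand-in for the interior point baselines the abstract compares against, not the authors' non-parametric belief propagation solver, and all problem sizes are illustrative assumptions.

# Minimal sketch of the fault-identification setup: a sparse binary
# fault pattern x observed through noisy linear measurements y = A x + w.
# Recovery here uses plain ISTA (iterative soft thresholding) as an L1
# baseline, NOT the paper's non-parametric belief propagation solver.
# All sizes and the 0.5 threshold below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                  # faults, measurements, active faults

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0   # rare binary faults
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# ISTA: x <- soft(x + t A^T (y - A x), t * lam)
lam = 0.05
t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/||A||^2
x = np.zeros(n)
for _ in range(500):
    z = x + t * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

# Exploit the binary nature of the faults by thresholding at 1/2
x_hat = (x > 0.5).astype(float)
print("support recovered:", np.array_equal(x_hat, x_true))

Note how the last line uses the binary nature of the faults, which is exactly the extra structure the abstract says the belief propagation solver exploits.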
Recovery of Functions of many Variables via Compressive Sensing by Albert Cohen, Ronald A. DeVore, Simon Foucart, Holger Rauhut. The abstract reads:
Recovery of functions of many variables from sample values usually suffers the curse of dimensionality: The number of required samples scales exponentially with the spatial dimension. In order to avoid this severe bottleneck, one needs to impose further structural properties of the function to be recovered apart from smoothness. Here, we build on ideas from compressive sensing and introduce a function model that involves “sparsity with respect to dimensions” in the Fourier domain. Using recent estimates on the restricted isometry constants of measurement matrices associated to randomly sampled trigonometric systems, we show that the number of required samples scales only logarithmically in the spatial dimension provided the function to be recovered follows the newly introduced high-dimensional function model.
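To make the "sparsity with respect to dimensions" idea concrete, here is a toy Python sketch: a function of d = 20 variables that is a short sum of one-dimensional Fourier modes is recovered from random samples by ℓ1 minimization. The single-coordinate candidate dictionary and all sizes are my own illustrative assumptions, not the construction used in the paper.

# Toy illustration of "sparsity with respect to dimensions": a function
# of d = 20 variables that is a short sum of single-coordinate Fourier
# modes, recovered from random samples by an L1 (ISTA) solver.
import numpy as np

rng = np.random.default_rng(1)
d, K, m = 20, 3, 80                  # dimension, max frequency, samples

# Candidate modes: cos/sin(2*pi*k*x_j) for each coordinate j, 1 <= k <= K
modes = [(j, k, tr) for j in range(d) for k in range(1, K + 1)
         for tr in (np.cos, np.sin)]

def design(X):
    # Evaluate every candidate mode at the sample points X (m x d)
    return np.column_stack([tr(2 * np.pi * k * X[:, j]) for j, k, tr in modes])

# Ground truth: 3 active modes out of len(modes) = 120 candidates
c_true = np.zeros(len(modes))
c_true[rng.choice(len(modes), 3, replace=False)] = rng.standard_normal(3)

X = rng.random((m, d))               # random sample locations in [0,1]^d
A = design(X)
y = A @ c_true

# Plain ISTA on (1/2)||A c - y||^2 + lam ||c||_1
lam, t = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(len(modes))
for _ in range(2000):
    z = c + t * A.T @ (y - A @ c)
    c = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

print("max coefficient error:", np.abs(c - c_true).max())

The point to notice is that m = 80 samples suffice even though the ambient dimension is 20: the sample count is driven by the sparsity of the Fourier representation, not by d.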
Sparse Signal Recovery and Dynamic Update of the Underdetermined System by M. Salman Asif and Justin Romberg. The abstract reads:
Sparse signal priors help in a variety of modern signal processing tasks. In many cases, a sparse signal needs to be recovered from an underdetermined system of equations: for instance, sparse approximation of a signal with an overcomplete dictionary, or reconstruction of a sparse signal from a small number of linear measurements. The reconstruction problem typically requires solving an ℓ1-norm minimization problem. In this paper we present homotopy-based algorithms to update the solution of some ℓ1 problems when the system is updated by adding new rows or columns to the underlying system matrix. We also discuss a case where these ideas can be extended to accommodate more general changes in the system matrix.
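The punchline is easiest to see numerically: once a solution of the ℓ1 problem is in hand, appending a measurement row and warm-starting from that solution converges far faster than solving from scratch. The sketch below uses a plain ISTA loop as a stand-in; the authors' homotopy algorithms update the solution path exactly rather than iterating like this.

# Schematic of the "dynamic update" idea: when a new measurement row is
# appended to the underdetermined system, warm-starting the L1 solver
# from the previous solution beats solving from scratch. ISTA is a
# stand-in here, not the authors' homotopy algorithm.
import numpy as np

rng = np.random.default_rng(2)
m, n, k, lam = 50, 200, 8, 0.02

def ista(A, y, x0, lam, tol=1e-8, max_iter=100000):
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.copy()
    for it in range(max_iter):
        z = x + t * A.T @ (y - A @ x)
        x_new = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

x_old, _ = ista(A, y, np.zeros(n), lam)

# Append one new measurement row to the system
a_new = rng.standard_normal(n) / np.sqrt(m)
A2 = np.vstack([A, a_new])
y2 = np.append(y, a_new @ x_true)

_, cold = ista(A2, y2, np.zeros(n), lam)
_, warm = ista(A2, y2, x_old, lam)
print(f"iterations from scratch: {cold}, warm-started: {warm}")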
A Nonlinear Approach to Dimension Reduction by Lee-Ad Gottlieb, Robert Krauthgamer. The abstract reads:
The ℓ2 flattening lemma of Johnson and Lindenstrauss [JL84] is a powerful tool for dimension reduction. It has been conjectured that the target dimension bounds can be refined and bounded in terms of the intrinsic dimensionality of the data set (for example, the doubling dimension). One such problem was proposed by Lang and Plaut [LP01] (see also [GKL03, Mat02, ABN08, CGT10]), and is still open. We prove another result in this line of work: the snowflake metric d^(1/2) of a doubling set S ⊂ ℓ2 can be embedded with arbitrarily low distortion into ℓ2^D, for dimension D that depends solely on the doubling constant of the metric. In fact, the target dimension is polylogarithmic in the doubling constant. Our techniques are robust and extend to the more difficult spaces ℓ1 and ℓ∞, although the dimension bounds here are quantitatively inferior to those for ℓ2.
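As a quick reminder of the flattening lemma the abstract starts from, here is a numerical check that a Gaussian random projection into D = O(log n / eps^2) dimensions roughly preserves pairwise ℓ2 distances. The constants are illustrative, and this is the classical Johnson-Lindenstrauss background rather than the paper's snowflake embedding.

# Numerical check of the JL flattening lemma: a Gaussian random
# projection into D = O(log(n)/eps^2) dimensions roughly preserves
# pairwise L2 distances, independently of the ambient dimension d.
# The constant 8 below is an illustrative choice, not a sharp bound.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
n, d, eps = 200, 1000, 0.5
D = int(8 * np.log(n) / eps ** 2)     # target dimension, independent of d

X = rng.standard_normal((n, d))       # n points in R^d
P = rng.standard_normal((D, d)) / np.sqrt(D)
Y = X @ P.T

# Compare all pairwise distances before and after projection
ratios = pdist(Y) / pdist(X)
print(f"D = {D}, distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")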
Heat Source Identification Based on L1 Constrained Minimization by Yingying Li, Stanley Osher, Richard Tsai. The abstract reads:
The inverse problem of finding sparse initial data from the solutions to the heat equation is considered. The initial data is assumed to be a sum of an unknown but finite number of Dirac delta functions at unknown locations. Point-wise values of the heat solution at only a few locations are used in an L1 constrained optimization to find such sparse initial data. A concept of domain of effective sensing is introduced to speed up the already fast Bregman iterative algorithm for L1 optimization. Furthermore, an algorithm which successively adds new measurements at intelligent locations is introduced. By comparing the solutions of the inverse problem that are obtained from different numbers of measurements, the algorithm decides where to add new measurements in order to improve the reconstruction of the sparse initial data.
It is nice to see this fact acknowledged:
In compressed sensing [7], we can solve an L0 problem by solving its L1 relaxation when the associated matrix has the restricted isometry property (RIP) [6]. The heat operator does not satisfy the RIP, but we can adopt the idea of substituting L0 with L1 for sparse optimization. We will show numerical results which indicate the effectiveness of this strategy. Our approach to this problem is to solve an L1 minimization problem with constraints. We apply the Bregman iterative method [9, 13] to solve the constrained problem as a sequence of unconstrained subproblems. To solve these subproblems, we use the greedy coordinate descent method developed in [11] for solving the L1 unconstrained problem, which was shown to be very efficient for sparse recovery.
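Here is a compact sketch of that Bregman outer loop: the constrained problem min ||x||_1 s.t. Ax = b is attacked through a sequence of unconstrained L1 subproblems whose right-hand side "adds back" the constraint residual. The inner solver below is plain ISTA standing in for the greedy coordinate descent of [11], and the Gaussian test matrix is a toy substitute for the heat operator.

# Sketch of the Bregman iterative method for min ||x||_1 s.t. Ax = b:
# each outer step solves an unconstrained subproblem and then adds the
# constraint residual back into the right-hand side. ISTA is used as
# the inner solver; sizes and mu are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
m, n, k, mu = 40, 120, 5, 10.0

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

def ista(A, y, lam, iters=2000):
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + t * A.T @ (y - A @ x)
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
    return x

# Bregman outer loop: b_k accumulates the constraint residual
b_k = b.copy()
for outer in range(10):
    x = ista(A, b_k, lam=1.0 / mu)    # min ||x||_1 + mu/2 ||Ax - b_k||^2
    b_k += b - A @ x                  # add back the residual
    print(f"outer {outer}: ||Ax - b|| = {np.linalg.norm(A @ x - b):.2e}")

The residual printout should shrink toward zero across outer iterations, which is the sense in which the sequence of unconstrained subproblems enforces the constraint Ax = b.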
A LANL/UCSD newsletter features some hardware performing compressive sensing:
Embedded computing and sensing are entrenched in many facets of daily life. Embedded devices must be able to collect large amounts of data from multiple sources, and then present the user with an “executive summary” of the observations. A user can then use this distilled information to quickly plan a course of action. It is therefore imperative that new methods are explored for distilling data to a form that is suitable for a user. Furthermore, the prevalence of wireless communications demands that relevant information be pre-extracted from high-dimensional data in order to reduce bandwidth, memory, and energy requirements.

Applications of interest to the EI such as structural health monitoring (SHM) and treaty verification typically require the collection of data from large arrays of wireless sensor nodes at high data rates. Data from different sensors must be combined in order to draw inferences about the state of the system under observation. The sensor nodes used to collect these data typically have severe data storage and energy constraints. Wireless transmission of data must be executed in a thoughtful manner and only the most relevant data should be committed to memory.

Recently, compressed sensing has presented itself as a candidate solution for directly collecting relevant information from sparse, high-dimensional measurements. The main idea behind compressed sensing is that, by directly collecting a relatively small number of coefficients, it is possible to reconstruct the original measurement. The coefficients are derived from linear combinations of measurements. Conveniently, most signals found in nature are indeed sparse, with the notable exception of noise. The findings of the compressed sensing community hold great potential for changing the way data are collected.

EI and ISR researchers (David Mascarenas, Chuck Farrar, Don Hush, James Thieler) have begun exploring the possibility of incorporating compressed sensing principles into SHM and treaty verification wireless sensor nodes. Advantages offered by compressed sensing include lower energy use for data collection and transmission, as well as reduced memory requirements. Compressed coefficients also have the interesting property that they are democratic, in the sense that no individual coefficient has any more information than any other. In this way they exhibit some robustness against data corruption. It is also worth noting that compressed sensing versions of conventional statistical signal processing techniques have been adopted by EI based on the use of smashed filters. The extension of statistical signal processing to the compressed domain helps facilitate the transition of the SHM strategies to take advantage of compressed sensing techniques.

Currently, a digital version of the compressed sensor is being developed onboard a microcontroller (shown in the figure). This compressed sensor node is being tested for an SHM application requiring acceleration measurements, as well as a CO2 climate treaty verification application. The prototype compressed sensor is capable of collecting compressed coefficients from measurements and sending them to an off-board processor for reconstruction. In addition, the smashed filter has also been implemented onboard the embedded compressed sensor node. Preliminary results have shown that the smashed filter successfully distinguishes between the damaged and undamaged states using only 1/32 the number of measurements used in the conventional matched filter.
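The smashed filter mentioned above is simple enough to sketch: matched filtering is performed directly on the compressed measurements y = Phi x, with no reconstruction step. Everything below (template shapes, sizes, noise level) is an illustrative assumption of mine, not the EI team's embedded implementation.

# Toy version of the smashed filter: classify a signal from compressed
# measurements y = Phi @ x by correlating y against compressed templates,
# without ever reconstructing x. M = N/32 mirrors the 1/32 measurement
# ratio quoted in the newsletter; the signatures are made up.
import numpy as np

rng = np.random.default_rng(5)
N = 1024
M = N // 32                      # 1/32 of the conventional measurements

t = np.linspace(0, 1, N)
undamaged = np.sin(2 * np.pi * 50 * t)                 # baseline signature
damaged = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
templates = {"undamaged": undamaged, "damaged": damaged}

Phi = rng.standard_normal((M, N)) / np.sqrt(M)         # random projection

def smashed_filter(y, Phi, templates):
    # Pick the template whose compressed version best matches y
    scores = {name: (y @ (Phi @ s)) / np.linalg.norm(Phi @ s)
              for name, s in templates.items()}
    return max(scores, key=scores.get)

x = damaged + 0.2 * rng.standard_normal(N)             # noisy observation
y = Phi @ x                                            # compressed on-node
print("classified as:", smashed_filter(y, Phi, templates))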
EI plans to extend compressive sensing to the mobile-host wireless sensor network architectures it has studied in the past, as compressed sensing holds great promise for distilling data collected from wireless sensor networks.