
Monday, December 07, 2009

CS: This week's long post.

My webcrawler grabbed several articles this weekend, and they make up this large post. First, the lecture notes Compressive Sensing and Structured Random Matrices are still under construction. If you have comments of any sort, Holger Rauhut would be happy to receive them.

While not expressly related to compressive sensing, I noted the following: Real-time convex optimization in signal processing by Jacob Mattingley and Stephen Boyd. The abstract reads:
Convex optimization has been used in signal processing for a long time, to choose coefficients for use in fast (linear) algorithms, such as in filter or array design; more recently, it has been used to carry out (nonlinear) processing on the signal itself. Examples of the latter case include total variation de-noising, compressed sensing, fault detection, and image classification. In both scenarios, the optimization is carried out on time scales of seconds or minutes, and without strict time constraints. Convex optimization has traditionally been considered computationally expensive, so its use has been limited to applications where plenty of time is available. Such restrictions are no longer justified. The combination of dramatically increased computational power, modern algorithms, and new coding approaches has delivered an enormous speed increase, which makes it possible to solve modest-sized convex optimization problems on microsecond or millisecond time scales, and with strict deadlines. This enables real-time convex optimization in signal processing.
Specifically, from the paper, the authors write:
....As a result, convex optimization problems that 20 years ago might have taken minutes to solve can now be solved in microseconds. This opens up several new possibilities. In the design context, algorithm weights can be re-designed or updated on fast time scales (say, kHz). Perhaps more exciting is the possibility that convex optimization can be embedded directly in signal processing algorithms that run on-line, with strict real-time deadlines, even at rates of tens of kHz. We will see that solving 10 000 modest sized convex optimization problems per second is entirely possible on a generic processor. This is quite remarkable, since solving an optimization problem is generally considered a computationally challenging task, and few engineers would consider an on-line algorithm, that requires the solution of an optimization problem at each step, to be feasible for signal rates measured in kHz....
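To give a feel for the orders of magnitude involved, here is a minimal timing sketch (mine, not the authors'): a small box-constrained least-squares problem solved with a fixed budget of projected gradient iterations in plain NumPy. The problem size, iteration count and solver are illustrative assumptions; the paper relies on custom, problem-specific solvers rather than this generic loop.

```python
# Rough, illustrative timing of a small convex problem (box-constrained
# least squares) solved by projected gradient descent in pure NumPy.
# This is NOT the authors' approach; it only illustrates that a
# modest-sized convex problem solves in well under a millisecond.
import time
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 20                      # "modest-sized" problem
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
x = np.zeros(n)

t0 = time.perf_counter()
for _ in range(200):               # fixed iteration budget, as in real-time use
    grad = A.T @ (A @ x - b)
    x = np.clip(x - grad / L, -1.0, 1.0)   # project onto the box [-1, 1]^n
elapsed = time.perf_counter() - t0

print(f"solved one 20-variable problem in {elapsed * 1e3:.2f} ms")
```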

Still not exactly compressive sensing: A Fast and Efficient Heuristic Nuclear Norm Algorithm for Rank Minimization by Thong T. Do, Yi Chen, Nam Nguyen, Lu Gan and Trac D. Tran. The abstract reads:
The problem of affine rank minimization seeks to find the minimum rank matrix that satisfies a set of linear equality constraints. Generally, since affine rank minimization is NP-hard, a popular heuristic method is to minimize the nuclear norm that is a sum of singular values of the matrix variable [1]. A recent intriguing paper [2] shows that if the linear transform that defines the set of equality constraints is nearly isometrically distributed and the number of constraints is at least O(r(m + n) log mn), where r and m × n are the rank and size of the minimum rank matrix, minimizing the nuclear norm yields exactly the minimum rank matrix solution. Unfortunately, it takes a large amount of computational complexity and memory buffering to solve the nuclear norm minimization problem with known nearly isometric transforms. This paper presents a fast and efficient algorithm for nuclear norm minimization that employs structurally random matrices [3] for its linear transform and a projected subgradient method that exploits the unique features of structurally random matrices to substantially speed up the optimization process. Theoretically, we show that nuclear norm minimization using structurally random linear constraints guarantees the minimum rank matrix solution if the number of linear constraints is at least O(r(m + n) log³ mn). Extensive simulations verify that structurally random transforms still retain optimal performance while their implementation complexity is just a fraction of that of completely random transforms, making them promising candidates for large scale applications.
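As a rough illustration of how nuclear norm minimization is usually attacked numerically, here is a small proximal-gradient (singular value soft-thresholding) sketch for a matrix-completion-style constraint. It is a generic iteration with made-up sizes and parameters, not the structurally random projected subgradient algorithm of the paper.

```python
# Minimal sketch of nuclear-norm-driven low-rank recovery via proximal
# gradient with singular value soft-thresholding. Generic illustration
# only; sizes, step and regularization weight are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 40, 40, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((m, n)) < 0.5                                # observed entries

def svt(Z, tau):
    """Soft-threshold the singular values of Z by tau (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Proximal gradient on  0.5 * ||P_Omega(X - M)||_F^2 + tau * ||X||_*
X = np.zeros((m, n))
tau = 0.1
for _ in range(500):
    grad = mask * (X - M)            # gradient of the data-fit term
    X = svt(X - grad, tau)           # gradient step, then singular value shrinkage

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```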
Ok, now we can dive into the subject: Fast and Efficient Dimensionality Reduction using Structurally Random Matrices by Thong T. Do, Lu Gan, Yi Chen, Nam Nguyen and Trac D. Tran. The abstract reads:
Structurally Random Matrices (SRM) were first proposed in [1] as fast and highly efficient measurement operators for large scale compressed sensing applications. Motivated by the bridge between compressed sensing and the Johnson-Lindenstrauss lemma [2], this paper introduces a related application of SRM regarding realizing a fast and highly efficient embedding. In particular, it shows that a SRM is also a promising dimensionality reduction transform that preserves all pairwise distances of high dimensional vectors within an arbitrarily small factor ε, provided the projection dimension is on the order of O(ε⁻² log³ N), where N denotes the number of d-dimensional vectors. In other words, the SRM can be viewed as the suboptimal Johnson-Lindenstrauss embedding that, however, owns very low computational complexity O(d log d) and highly efficient implementation that uses only O(d) random bits, making it a promising candidate for practical, large scale applications where efficiency and speed of computation is highly critical.
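The basic SRM recipe is easy to demo: randomize the signal with random sign flips, apply a fast orthonormal transform, and keep a random subset of the output. Below is a minimal sketch of that D·F·R structure in NumPy/SciPy; the choice of DCT, the sizes and the scaling are my own illustrative assumptions, not the exact construction analyzed in the paper.

```python
# Minimal sketch of a Structurally Random Matrix applied to a vector:
# random sign flips (R), a fast orthonormal transform (F, here a DCT),
# then random subsampling of the output (D).
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(2)
d, k = 1024, 64                              # ambient and projection dimensions

signs = rng.choice([-1.0, 1.0], size=d)      # R: random sign flips, O(d) random bits
rows = rng.choice(d, size=k, replace=False)  # D: keep k randomly chosen rows

def srm_apply(x):
    """Apply the k x d structurally random projection to x in O(d log d) time."""
    y = dct(signs * x, norm="ortho")         # F: fast orthonormal transform
    return np.sqrt(d / k) * y[rows]          # rescale so distances are preserved on average

x1 = rng.standard_normal(d)
x2 = rng.standard_normal(d)
print("original distance :", np.linalg.norm(x1 - x2))
print("embedded distance :", np.linalg.norm(srm_apply(x1) - srm_apply(x2)))
```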

Compressed Sensing and Routing in Multi-Hop Networks by Sungwon Lee, Sundeep Pattem, Maheswaran Sathiamoorthy, Bhaskar Krishnamachari, and Antonio Ortega. The abstract reads:
We study how compressed sensing can be combined with routing design for energy efficient data gathering in sensor networks. We first obtain some bounds on the performance of compressed sensing with simple routing schemes. We then formulate a new problem relating routing paths to data projections, and present a centralized, greedy algorithm for obtaining low coherence projections while simultaneously reducing reconstruction error and communication cost. Simulation results show that the effectiveness of standard compressed sensing techniques is limited when routing costs are considered. A naive, randomized downsampling is shown to outperform the standard techniques in terms of achievable SNR for a given energy budget. While our algorithm has better performance than standard compressed sensing, it does not improve significantly over downsampling in terms of the cost. Thus we believe that further investigation is needed to determine if novel routing strategies exist that i) provide a sufficient spatial coverage to enable good coherence properties for compressed sensing along those routes, and ii) have transport costs that do not deviate significantly from those of efficient routing techniques.
Inflating Compressed Samples: A Joint-Channel Coding Approach for Noise-Resistant Compressed Sensing by A. Hesam Mohseni, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:
Recently, a lot of research has been done on compressed sensing, capturing compressible signals using random linear projections to a space of radically lower dimension than the ambient dimension of the signal. The main impetus of this is that the radically dimension lowering linear projection step can be done totally in analog hardware, in some cases even in constant time, to avoid the bottleneck in sensing and quantization steps where a large number of samples need to be sensed and quantized in short order, mandating the use of a large number of fast expensive sensors and A/D converters. Reconstruction algorithms from these projections have been found that come within distortion levels comparable to the state of the art in lossy compression algorithms. This paper considers a variation on compressed sensing that makes it resistant to spiky noise. This is achieved by an analog real-field error-correction coding step. It results in a small asymptotic overhead in the number of samples, but makes exact reconstruction under spiky measurement noise, one type of which is the salt and pepper noise in imaging devices, possible. Simulations are performed that corroborate our claim and in fact substantially improve reconstruction under unreliable sensing characteristics and are stable even under small perturbations with Gaussian noise.
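One generic way to see how sparse ("spiky") measurement noise can be absorbed is to treat the corruption as an extra sparse unknown and minimize the ℓ1 norm of the stacked vector. The sketch below does exactly that with a small linear program; it captures the spirit of real-field error correction but is not the authors' joint coding construction, and all sizes are illustrative.

```python
# Generic sketch of recovery under spiky (sparse) measurement noise:
# model y = A x + e with both x and the corruption e sparse, and solve
#     minimize ||x||_1 + ||e||_1   subject to   A x + e = y
# as a linear program over z = [x; e] split into positive/negative parts.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m = 100, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n); x_true[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)
e_true = np.zeros(m); e_true[rng.choice(m, 3, replace=False)] = 5.0   # salt-and-pepper-like spikes
y = A @ x_true + e_true

B = np.hstack([A, np.eye(m)])                  # dictionary for the stacked unknown [x; e]
c = np.ones(2 * (n + m))                       # minimize the sum of absolute values
A_eq = np.hstack([B, -B])                      # B (u - v) = y with u, v >= 0
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
z = res.x[: n + m] - res.x[n + m:]
x_hat, e_hat = z[:n], z[n:]

print("x recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
print("e recovery error:", np.linalg.norm(e_hat - e_true) / np.linalg.norm(e_true))
```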

Efficiently Decodable Non-adaptive Group Testing by Piotr Indyk, Hung Q. Ngo, Atri Rudra. The abstract reads:
We consider the following "efficiently decodable" non-adaptive group testing problem. There is an unknown string x ∈ {0,1}ⁿ with at most d ones in it. We are allowed to test any subset S ⊆ [n] of the indices. The answer to the test tells whether x_i = 0 for all i ∈ S or not. The objective is to design as few tests as possible (say, t tests) such that x can be identified as fast as possible (say, poly(t)-time). Efficiently decodable non-adaptive group testing has applications in many areas, including data stream algorithms and data forensics. A non-adaptive group testing strategy can be represented by a t × n matrix, which is the stacking of all the characteristic vectors of the tests. It is well-known that if this matrix is d-disjunct, then any test outcome corresponds uniquely to an unknown input string. Furthermore, we know how to construct d-disjunct matrices with t = O(d² log n) efficiently. However, these matrices so far only allow for a "decoding" time of O(nt), which can be exponentially larger than poly(t) for relatively small values of d. This paper presents a randomness efficient construction of d-disjunct matrices with t = O(d² log n) that can be decoded in time poly(d) · t log² t + O(t²). To the best of our knowledge, this is the first result that achieves an efficient decoding time and matches the best known O(d² log n) bound on the number of tests. We also derandomize the construction, which results in a polynomial time deterministic construction of such matrices when d = O(log n / log log n). A crucial building block in our construction is the notion of (d, ℓ)-list disjunct matrices, which represent the more general "list group testing" problem whose goal is to output less than d + ℓ positions in x, including all the (at most d) positions that have a one in them. List disjunct matrices turn out to be interesting objects in their own right and were also considered independently by [Cheraghchi, FCT 2009]. We present connections between list disjunct matrices, expanders, dispersers and disjunct matrices. List disjunct matrices have applications in constructing (d, ℓ)-sparsity separator structures [Ganguly, ISAAC 2008] and in constructing tolerant testers for Reed-Solomon codes in the data stream model.
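For context, the O(nt) "naive" decoder the abstract wants to beat is one line of logic: discard every item that appears in a negative test. Here is a small sketch with a random Bernoulli test matrix standing in for the paper's structured constructions (the design and parameters below are illustrative only).

```python
# Sketch of the naive O(n t) group-testing decoder: any item that occurs
# in a test with a negative outcome cannot be defective. With a d-disjunct
# test matrix, the surviving items are exactly the defectives. A random
# Bernoulli design stands in for the paper's constructions here.
import numpy as np

rng = np.random.default_rng(4)
n, d, t = 200, 3, 80                       # items, defectives, tests

M = rng.random((t, n)) < 1.0 / (d + 1)     # random test design (illustrative only)
x = np.zeros(n, dtype=bool)
x[rng.choice(n, d, replace=False)] = True  # unknown defective set

outcomes = (M.astype(int) @ x.astype(int)) > 0   # a test is positive iff it hits a defective

# Naive decoding: eliminate every item that occurs in some negative test.
eliminated = M[~outcomes].any(axis=0)
candidates = np.flatnonzero(~eliminated)

print("true defectives    :", np.flatnonzero(x))
print("decoded candidates :", candidates)
```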
Sparse Estimation Using General Likelihoods and Non-Factorial Priors by David Wipf and Srikantan Nagarajan. The abstract reads:
Finding maximally sparse representations from overcomplete feature dictionaries frequently involves minimizing a cost function composed of a likelihood (or data fit) term and a prior (or penalty function) that favors sparsity. While typically the prior is factorial, here we examine non-factorial alternatives that have a number of desirable properties relevant to sparse estimation and are easily implemented using an efficient and globally-convergent, reweighted ℓ1-norm minimization procedure. The first method under consideration arises from the sparse Bayesian learning (SBL) framework. Although based on a highly non-convex underlying cost function, in the context of canonical sparse estimation problems, we prove uniform superiority of this method over the Lasso in that, (i) it can never do worse, and (ii) for any dictionary and sparsity profile, there will always exist cases where it does better. These results challenge the prevailing reliance on strictly convex penalty functions for finding sparse solutions. We then derive a new non-factorial variant with similar properties that exhibits further performance improvements in some empirical tests. For both of these methods, as well as traditional factorial analogs, we demonstrate the effectiveness of reweighted ℓ1-norm algorithms in handling more general sparse estimation problems involving classification, group feature selection, and non-negativity constraints. As a byproduct of this development, a rigorous reformulation of sparse Bayesian classification (e.g., the relevance vector machine) is derived that, unlike the original, involves no approximation steps and descends a well-defined objective function.
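Since reweighted ℓ1 minimization is the computational workhorse here, a bare-bones version of the loop may help: solve a weighted ℓ1-regularized least-squares problem, then recompute the weights from the current estimate. The sketch below uses plain ISTA for the inner solve and the generic 1/(|x_i| + eps) weight update, which is not the SBL-derived weighting of the paper; sizes and parameters are illustrative.

```python
# Minimal sketch of a reweighted l1 loop: repeatedly solve a weighted
# l1-regularized least-squares problem (here by plain ISTA) and update
# the weights from the current estimate.
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 40, 100, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2          # ISTA step size is 1/L
lam, eps = 1e-2, 1e-2
w = np.ones(n)
x = np.zeros(n)

for outer in range(5):                 # reweighting iterations
    for _ in range(500):               # inner weighted-lasso solve via ISTA
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    w = 1.0 / (np.abs(x) + eps)        # small coefficients get penalized harder next round

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```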
Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements by Jarvis Haupt, Richard Baraniuk, Rui Castro and Robert Nowak. The abstract reads:
The recently-proposed theory of distilled sensing establishes that adaptivity in sampling can dramatically improve the performance of sparse recovery in noisy settings. In particular, it is now known that adaptive point sampling enables the detection and/or support recovery of sparse signals that are otherwise too weak to be recovered using any method based on non-adaptive point sampling. In this paper the theory of distilled sensing is extended to highly-undersampled regimes, as in compressive sensing. A simple adaptive sampling-and-refinement procedure called compressive distilled sensing is proposed, where each step of the procedure utilizes information from previous observations to focus subsequent measurements into the proper signal subspace, resulting in a significant improvement in effective measurement SNR on the signal subspace. As a result, for the same budget of sensing resources, compressive distilled sensing can result in significantly improved error bounds compared to those for traditional compressive sensing.
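A crude way to see the adaptivity payoff: spend part of the measurement budget on a first non-adaptive pass to shortlist candidate coordinates, then concentrate the remaining measurements on that shortlist, which raises the per-coordinate SNR. The two-stage sketch below is only meant to convey that intuition; the stage sizes, candidate rule and estimator are my assumptions, not the procedure analyzed in the paper.

```python
# Two-stage adaptive sketch: a non-adaptive first pass picks candidate
# coordinates, and the second pass focuses all its measurements on them.
import numpy as np

rng = np.random.default_rng(9)
n, k, sigma = 300, 5, 0.3
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = 2.0

# Stage 1: non-adaptive random measurements over all n coordinates.
m1 = 60
A1 = rng.standard_normal((m1, n)) / np.sqrt(m1)
y1 = A1 @ x + sigma * rng.standard_normal(m1)
candidates = np.argsort(np.abs(A1.T @ y1))[-4 * k:]      # crude shortlist of coordinates

# Stage 2: remaining measurements focused on the shortlisted coordinates only.
m2 = 60
A2 = np.zeros((m2, n))
A2[:, candidates] = rng.standard_normal((m2, len(candidates))) / np.sqrt(m2)
y2 = A2 @ x + sigma * rng.standard_normal(m2)

# Least-squares estimate restricted to the shortlist.
x_hat = np.zeros(n)
x_hat[candidates], *_ = np.linalg.lstsq(A2[:, candidates], y2, rcond=None)

print("true support      :", np.sort(np.flatnonzero(x)))
print("top-k of estimate :", np.sort(np.argsort(np.abs(x_hat))[-k:]))
```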
A Note on Optimal Support Recovery in Compressed Sensing by Galen Reeves and Michael Gastpar. The abstract reads:
Recovery of the support set (or sparsity pattern) of a sparse vector from a small number of noisy linear projections (or samples) is a “compressed sensing” problem that arises in signal processing and statistics. Although many computationally efficient recovery algorithms have been studied, the optimality (or gap from optimality) of these algorithms is, in general, not well understood. In this note, approximate support recovery under a Gaussian prior is considered, and it is shown that optimal estimation depends on the recovery metric in general. By contrast, it is shown that in the SNR limits, there exist uniformly near-optimal estimators, namely, the ML estimate in the high SNR case, and a computationally trivial thresholding algorithm in the low SNR case.
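The "computationally trivial" low-SNR estimator mentioned in the abstract amounts to a single matched filter: correlate the measurements with each column of the sensing matrix and keep the largest scores. A small sketch follows, with made-up sizes, prior and noise level.

```python
# Sketch of the thresholding estimator for support recovery: one
# matrix-vector product, then keep the k largest correlations.
import numpy as np

rng = np.random.default_rng(6)
m, n, k, sigma = 80, 200, 5, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)

support = rng.choice(n, k, replace=False)
x = np.zeros(n); x[support] = rng.standard_normal(k)     # Gaussian prior on the nonzeros
y = A @ x + sigma * rng.standard_normal(m)

scores = np.abs(A.T @ y)                                 # single matched-filter pass
support_hat = np.sort(np.argsort(scores)[-k:])           # keep the k largest correlations

print("true support     :", np.sort(support))
print("estimated support:", support_hat)
```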
Sublinear-Time Fourier Algorithms with Recovery Guarantees and Uniformly Bounded Sampling Requirements by Mark Iwen. The abstract reads:
We study the problem of quickly estimating the best k-term Fourier representation for a given frequency-sparse signal (i.e., vector or array) A of length N ≫ k. In essence, we investigate how to identify k of the largest magnitude frequencies of Â, and estimate their coefficients, in k² · polylog(||Â||₁, N) time. Randomized sublinear-time Monte Carlo algorithms, which have a small probability of failing to output accurate answers for each input signal, exist for solving this problem [15, 16, 18]. In this paper we develop the first known Las Vegas sparse Fourier Transform methods which are guaranteed to produce accurate results, and run in sublinear time on each input signal A ∈ C^N with high probability. The Las Vegas results developed herein improve on recent deterministic Fourier methods [19, 18, 1] by guaranteeing accurate answers while having both (i) substantially smaller (i.e., near-optimal) guaranteed sampling requirements, and (ii) better runtime complexity on each input signal with high probability.
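For readers new to the problem, the non-sublinear baseline is worth keeping in mind: a full FFT followed by keeping the k largest-magnitude frequencies gives the best k-term Fourier representation, at O(N log N) cost and after reading all N samples. The sketch below shows just that baseline on an exactly k-sparse toy spectrum; the point of the paper's Las Vegas algorithms is to get essentially the same answer in sublinear time from far fewer samples.

```python
# Baseline (non-sublinear) illustration: full FFT, then keep the k
# largest-magnitude frequencies. No attempt at sublinearity is made here.
import numpy as np

rng = np.random.default_rng(10)
N, k = 4096, 6
freqs = rng.choice(N, k, replace=False)
coeffs = rng.standard_normal(k) + 1j * rng.standard_normal(k)

t = np.arange(N)
A = sum(c * np.exp(2j * np.pi * f * t / N) for c, f in zip(coeffs, freqs))

A_hat = np.fft.fft(A) / N                      # full-length FFT: O(N log N)
top_k = np.argsort(np.abs(A_hat))[-k:]         # k largest-magnitude frequencies

print("true frequencies     :", np.sort(freqs))
print("recovered frequencies:", np.sort(top_k))
```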

Group Sparse Coding by Samy Bengio, Fernando Pereira, Yoram Singer, Dennis Strelow. The abstract reads:
Bag-of-words document representations are often used in text, image and video processing. While it is relatively easy to determine a suitable word dictionary for text documents, there is no simple mapping from raw images or videos to dictionary terms. The classical approach builds a dictionary using vector quantization over a large set of useful visual descriptors extracted from a training set, and uses a nearest-neighbor algorithm to count the number of occurrences of each dictionary word in documents to be encoded. More robust approaches have been proposed recently that represent each visual descriptor as a sparse weighted combination of dictionary words. While favoring a sparse representation at the level of visual descriptors, those methods however do not ensure that images have sparse representation. In this work, we use mixed-norm regularization to achieve sparsity at the image level as well as a small overall dictionary. This approach can also be used to encourage using the same dictionary words for all the images in a class, providing a discriminative signal in the construction of image representations. Experimental results on a benchmark image classification dataset show that when compact image or dictionary representations are needed for computational efficiency, the proposed approach yields better mean average precision in classification.
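The workhorse behind mixed-norm (ℓ1/ℓ2) regularization is the group soft-thresholding operator, which zeroes out whole blocks of coefficients at once. Here is a minimal sketch of that operator on a toy coefficient vector; the grouping, sizes and threshold are illustrative, and the paper uses the idea inside a full dictionary-learning and encoding pipeline.

```python
# Group soft-thresholding: the proximal operator of tau * sum_g ||c[g]||_2,
# which shrinks or kills entire groups of coefficients at once.
import numpy as np

def group_soft_threshold(coeffs, groups, tau):
    """Block shrinkage: scale each group toward zero, zeroing small groups entirely."""
    out = np.zeros_like(coeffs)
    for g in groups:
        norm = np.linalg.norm(coeffs[g])
        if norm > tau:
            out[g] = (1.0 - tau / norm) * coeffs[g]   # shrink the whole group
        # else: the whole group is set to zero
    return out

rng = np.random.default_rng(7)
c = rng.standard_normal(12)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
print("before:", np.round(c, 2))
print("after :", np.round(group_soft_threshold(c, groups, tau=2.0), 2))
```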

and finally a poster: Ear-Phone - Assessment of Noise Pollution with Mobile Phones by Rajib Kumar Rana, Chun Tung Chou, Salil Kanhere, Nirupama Bulusu, Wen Hu. The abstract reads:
Noisemap can provide useful information to control noise pollution. We propose a people-centric noise collection system called the Ear-Phone. Due to the voluntary participation of people, the number and location of samples cannot be guaranteed. We propose and study two methods, based on compressive sensing, to reconstruct the missing samples.
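To see how compressive sensing can fill in samples that volunteers never collected, one can model the collected readings as incomplete measurements of a signal that is sparse in some transform basis (a DCT below) and recover the transform coefficients by ℓ1 minimization. The sketch below is just that generic interpolation idea with made-up sizes and a toy signal; it is not the Ear-Phone system itself.

```python
# Fill in missing samples by l1 minimization over DCT coefficients:
#   min ||c||_1  s.t.  (observed rows of Psi) c = observed samples,
# written as a linear program over c = u - v with u, v >= 0.
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(8)
n, m, k = 128, 50, 5

Psi = idct(np.eye(n), norm="ortho", axis=0)        # DCT synthesis basis
c_true = np.zeros(n); c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
signal = Psi @ c_true                              # toy "noise level" series, sparse in DCT

observed = np.sort(rng.choice(n, m, replace=False))   # samples actually collected
A, y = Psi[observed, :], signal[observed]

res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
c_hat = res.x[:n] - res.x[n:]
recon = Psi @ c_hat

missing = np.setdiff1d(np.arange(n), observed)
print("error on missing samples:", np.linalg.norm(recon[missing] - signal[missing]))
```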


Image Credit: NASA/JPL/Space Science Institute, W00061912.jpg was taken on December 04, 2009 and received on Earth December 05, 2009. The camera was pointing toward SATURN at approximately 2,157,548 kilometers away, and the image was taken using the CB2 and CL2 filters.
