
Monday, December 27, 2010

CS: IBAS, Superselectors, Compressive sensing for feedback reduction in MIMO broadcast channels, Reduced-Dimension Multiuser Detection, Efficient reconstruction method for L1 regularization in fluorescence molecular tomography, icalp2011gt

Want to read about a compressive sensing system used to detect unexpected events? Here is one: the INTEGRAL spacecraft has several coded aperture cameras on board that are used to detect Gamma-Ray Bursts. An arXiv summary of the system is here. A longer version can be found in Diego Götz's thesis, A study of Gamma-Ray Bursts and Soft Gamma Repeaters Detected with the INTEGRAL Burst Alert System. The INTEGRAL Burst Alert System webpage is here. The reconstruction is currently performed using traditional linear deconvolution, which does not provide superresolution. The alert system relies on detection performed directly on the compressed measurements: when those measurements cross a threshold, additional processing is triggered, including the deconvolution and a comparison between the reconstructed image and the prior knowledge we have about that part of the sky.
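For readers who want to see the logic rather than the spacecraft, here is a toy sketch in Python (my own simplification, not the actual IBAS/INTEGRAL code): a 1-D sky is observed through a random coded mask, a cheap count-rate test on the raw, compressed measurements serves as the trigger, and only a trigger pays for the correlation-based reconstruction. All sizes, rates and thresholds below are made up.

import numpy as np

rng = np.random.default_rng(0)

n = 64                                  # 1-D toy "sky" and detector size
mask = rng.integers(0, 2, size=n)       # random open/closed coded-aperture mask
background = 5.0                        # mean background counts per detector cell

def measure(sky):
    # Detector counts: the sky seen through the mask (a circular correlation) plus Poisson noise.
    flux = np.array([np.dot(np.roll(mask, -k), sky) for k in range(n)]) + background
    return rng.poisson(flux)

def triggered(counts, n_sigma=5.0):
    # Cheap trigger on the compressed data: total count rate well above the quiescent expectation.
    expected = background * n
    return counts.sum() > expected + n_sigma * np.sqrt(expected)

def deconvolve(counts):
    # Classic balanced correlation decoding -- the linear step that does not give superresolution.
    weights = 2 * mask - 1              # +1 for open, -1 for closed mask elements
    return np.array([np.dot(np.roll(weights, -y), counts) for y in range(n)])

sky = np.zeros(n)
sky[17] = 200.0                         # a single burst-like point source
counts = measure(sky)
if triggered(counts):                   # only then run the more expensive reconstruction step
    image = deconvolve(counts)
    print("trigger fired, brightest sky pixel:", int(np.argmax(image)))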

Ferdinando Cicalese recently gave a presentation in Slovenia entitled Superselectors: Efficient Constructions and Applications:
Superimposed codes represent the main tool for the efficient solution of several problems arising in compressed sensing, cryptography and data security, computational biology, multi-access communication, database theory, pattern matching, distributed colouring, and circuit complexity, among others.

It has also become apparent that combinatorial structures closely related to superimposed codes lie at the heart of an even vaster series of problems. For example, selectors were instrumental in obtaining fast broadcasting algorithms in radio networks, (p,k,n)-selectors were the basic tool for the first two-stage group testing algorithm with an information-theoretically optimal number of tests, and (d,ℓ)-disjunct matrices were a crucial building block for efficiently decodable non-adaptive group testing procedures.

We shall focus on a new combinatorial structure, superselectors, which encompasses and unifies all of the combinatorial structures mentioned above (and more). When appropriately instantiated, superselectors asymptotically match the best known constructions of (p,k,n)-selectors, (d,ℓ)-list-disjunct matrices, monotone encodings, (k,α)-FUT families, and MUT_k(r)-families for the multi-access channel.

We shall show some implementations of superselectors which provide optimal approximate group testing schemes and quasi-optimal additive group testing schemes.

Slides from the lecture are available here
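To give a flavor of the kind of structure these objects generalize, here is a minimal non-adaptive group testing sketch in Python. It uses a random Bernoulli pooling matrix and the standard cover decoder rather than an actual superselector or (p,k,n)-selector construction; the sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

n_items, n_tests, d = 200, 60, 3          # population size, number of pooled tests, defectives
A = rng.random((n_tests, n_items)) < 1.0 / (d + 1)        # Bernoulli pooling matrix
defectives = set(int(j) for j in rng.choice(n_items, size=d, replace=False))

# Test outcomes: a pool is positive iff it contains at least one defective item.
positive = np.array([any(int(j) in defectives for j in np.flatnonzero(row)) for row in A])

# Cover decoder: keep only the items that never occur in a negative pool.
candidates = {j for j in range(n_items)
              if not any(A[t, j] and not positive[t] for t in range(n_tests))}

print("true defectives: ", sorted(defectives))
print("decoded superset:", sorted(candidates))   # guaranteed to contain the defectives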
Meanwhile, on the 23rd, there was a seminar at Tsinghua:
Group: Theory Lunch
Title: Instance Compression for NP Problems/Compressive Data Gathering/Deployment Problem in the Wireless Sensor Networks
Speaker: Bangsheng Tang, Liwen Xu, Xiaohong Hao
Date: 12:00 -- 13:30 Dec 23, 2010
Venue: FIT 1-222
Abstract:

Title: Instance Compression for NP problems
Abstract: The OR-SAT problem asks, given m CNFs, whether at least one of them is satisfiable. In the work of Fortnow and Santhanam, it is proved that OR-SAT cannot be reduced to any set A where the instance length is bounded by a polynomial in n, unless PH collapses. This result implies upper bounds for several kernelization problems and for the size of PCPs for SAT, and also implies that there are no subexponential-size hard sets for NP. This is a follow-up to Harnik and Naor's work, later followed by Dell and van Melkebeek. If time permits, I will try to explain some possible directions for extending this work.
Title: Compressive Data Gathering

Abstract: Compressive sensing is a new technique in signal processing that collects a low-dimensional projection of a high-dimensional but sparse signal and recovers the original signal by l1-norm optimization, which takes the place of the l0-norm optimization that turns out to be NP-hard. I will talk about a new scheme for data gathering in wireless sensor networks that uses compressive sensing in a distributed fashion, and show some newly observed properties of matrices that satisfy the RIP (Restricted Isometry Property).
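As a reminder of the l1 step this kind of scheme relies on, here is a generic Python sketch (not the data-gathering scheme of the talk): a sparse signal is measured with a random Gaussian matrix and recovered by iterative soft thresholding (ISTA), one of many solvers for the l1-regularized problem. The dimensions and the regularization parameter are arbitrary.

import numpy as np

rng = np.random.default_rng(2)

n, m, k = 400, 120, 10                         # ambient dimension, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix (RIP holds w.h.p.)
y = A @ x                                      # low-dimensional projection gathered at the sink

def ista(A, y, lam=0.01, n_iter=500):
    # Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    xh = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ xh - y)
        z = xh - grad / L
        xh = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return xh

xh = ista(A, y)
print("relative reconstruction error:", np.linalg.norm(xh - x) / np.linalg.norm(x))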
Title: Deployment problem in the Wireless Sensor Networks
Abstract: Coverage is critical for wireless sensor networks to monitor a region of interest and provide a good quality of service. Here we have two descriptions of the problem. One is to place the minimum number of sensors needed to achieve a coverage requirement for a region with obstacles; this problem is NP-hard. I will introduce an algorithm based on deployment patterns and triangulation from computational geometry. The other description is: for a given number of sensors, maximize the sensor field coverage. I will present a Virtual Force algorithm as a sensor deployment strategy to enhance the coverage after an initial random placement of sensors.
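Since the virtual force idea is easy to picture, here is a rough Python sketch of one such update rule (my own simplification, not necessarily the speaker's algorithm): sensors attract each other when farther apart than a preferred spacing, repel when closer, and are nudged iteratively after a random initial placement. The spacing, step size and iteration count are made up.

import numpy as np

rng = np.random.default_rng(3)

n_sensors, d_ref, step = 30, 0.2, 0.05     # sensor count, preferred spacing, move step
pos = rng.random((n_sensors, 2))           # initial random placement in the unit square

for _ in range(200):
    force = np.zeros_like(pos)
    for i in range(n_sensors):
        diff = pos - pos[i]                              # vectors from sensor i to every other sensor
        dist = np.linalg.norm(diff, axis=1)
        mag = np.zeros(n_sensors)
        others = dist > 0                                # skip the sensor itself
        mag[others] = (dist[others] - d_ref) / dist[others]   # attract if far, repel if close
        force[i] = (mag[:, None] * diff).sum(axis=0)
    pos = np.clip(pos + step * force / n_sensors, 0.0, 1.0)  # small bounded move inside the field

# Crude proxy for how evenly the field is covered after the virtual-force moves.
nn = [np.sort(np.linalg.norm(pos - p, axis=1))[1] for p in pos]
print("mean nearest-neighbour distance:", float(np.mean(nn)))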
Today, we have three papers:
Compressive sensing for feedback reduction in MIMO broadcast channels by Syed T. Qaseem, Tareq Y. Al-Naffouri, Mohammed E. Eltayeb, and Hamid Reza Bahrami. The abstract reads:
We propose a generalized feedback model and compressive sensing based opportunistic feedback schemes for feedback resource reduction in MIMO Broadcast Channels, under the assumption that both uplink and downlink channels undergo block Rayleigh fading. Feedback resources are shared and are opportunistically accessed by users who are strong, i.e. users whose channel quality information is above a certain fixed threshold. Strong users send the same feedback information on all shared channels. They are identified by the base station via compressive sensing. Both analog and digital feedback are considered. The proposed analog and digital opportunistic feedback schemes are shown to achieve the same sum-rate throughput as that achieved by dedicated feedback schemes, but with the number of feedback channels growing only logarithmically with the number of users. Moreover, there is also a reduction in the feedback load. In the analog feedback case, we show that the proposed scheme reduces the feedback noise, which eventually results in better throughput, whereas in the digital feedback case the proposed scheme in a noisy scenario achieves almost the throughput obtained in a noiseless dedicated feedback scenario. We also show that for a given fixed budget of feedback bits, there exists a trade-off between the number of shared channels and the threshold accuracy of the fed-back SNR.
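Here is a toy illustration of the identification step as I read it (the dimensions and signature model are made up, and this is not the authors' exact scheme): each strong user superimposes its signature on the shared feedback channels, and the base station recovers the sparse activity pattern with orthogonal matching pursuit.

import numpy as np

rng = np.random.default_rng(4)

n_users, n_channels, n_strong = 500, 64, 4      # users, shared feedback channels, strong users
S = rng.standard_normal((n_channels, n_users)) / np.sqrt(n_channels)   # per-user signatures
strong = rng.choice(n_users, n_strong, replace=False)

activity = np.zeros(n_users)
activity[strong] = 1.0                           # strong users transmit, the rest stay silent
y = S @ activity + 0.01 * rng.standard_normal(n_channels)   # superposed noisy feedback at the BS

def omp(S, y, k):
    # Orthogonal Matching Pursuit: greedily pick the k columns best correlated with the residual.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(S.T @ residual))))
        coef, *_ = np.linalg.lstsq(S[:, support], y, rcond=None)
        residual = y - S[:, support] @ coef
    return sorted(support)

print("true strong users:   ", sorted(int(u) for u in strong))
print("identified by the BS:", omp(S, y, n_strong))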
The second paper is Reduced-Dimension Multiuser Detection. The abstract reads:
We present a new framework for reduced-dimension multiuser detection (RD-MUD) that trades off complexity for bit-error-rate (BER) performance. This approach can significantly reduce the number of matched filter branches required by classic multiuser detection designs. We show that the RD-MUD can perform similarly to the linear MUD detector when M is sufficiently large relative to N and K, where N and K are the number of total and active users, respectively. We also study the inherent RD-MUD tradeoff between complexity (the number of correlating signals) and BER performance. This leads to a new notion of approximate sufficient statistics, whereby sufficient statistics are approximated to reduce complexity at the expense of some BER performance loss.
This one is behind a paywall: Efficient reconstruction method for L1 regularization in fluorescence molecular tomography by Dong Han, Xin Yang, Kai Liu, Chenghu Qin, Bo Zhang, Xibo Ma, and Jie Tian. The abstract reads:
Fluorescence molecular tomography (FMT) is a promising technique for in vivo small animal imaging. In this paper, the sparsity of the fluorescent sources is considered as a priori information and is promoted by incorporating L1 regularization. Then a reconstruction algorithm based on stagewise orthogonal matching pursuit is proposed, which treats the FMT problem as a basis pursuit problem. To evaluate this method, we compare it to an iterated-shrinkage-based algorithm with L1 regularization. Numerical simulations and physical experiments show that the proposed method obtains comparable or even slightly better results. More importantly, the proposed method was at least two orders of magnitude faster in these experiments, which makes it a practical reconstruction algorithm.
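For readers unfamiliar with the solver, here is a generic Python sketch of stagewise orthogonal matching pursuit on a random problem (not an FMT forward model): unlike plain OMP, each stage admits every column whose correlation with the residual exceeds a threshold, which is what makes it fast. All sizes and the threshold are arbitrary.

import numpy as np

rng = np.random.default_rng(5)

m, n, k = 100, 300, 8                       # measurements, unknowns, sparsity (made-up sizes)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = (1.0 + rng.random(k)) * rng.choice([-1.0, 1.0], k)
y = A @ x

def stomp(A, y, n_stages=10, t=2.5):
    # Stagewise OMP: at each stage admit every column whose correlation with the residual
    # exceeds a threshold, then re-solve least squares on the enlarged support.
    residual = y.copy()
    support = np.zeros(A.shape[1], dtype=bool)
    xh = np.zeros(A.shape[1])
    for _ in range(n_stages):
        if np.linalg.norm(residual) < 1e-8 * np.linalg.norm(y):
            break
        corr = A.T @ residual
        sigma = np.linalg.norm(residual) / np.sqrt(len(y))    # per-coordinate residual scale
        new = (np.abs(corr) > t * sigma) & ~support
        if not new.any():
            break
        support |= new
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        xh[:] = 0.0
        xh[support] = coef
        residual = y - A @ xh
    return xh

xh = stomp(A, y)
print("true support contained in estimate:", set(np.flatnonzero(x)) <= set(np.flatnonzero(xh)))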

There is a call for papers for icalp2011gt:
Algorithms and Data Structures for selection, identification and encoding.
Group testing, compressed sensing, multi-access communications and more.

SCOPE

The workshop aims at bringing together researchers to exchange ideas and results related to the theoretical and practical aspects of group testing, compressed sensing and combinatorial identification in a broad sense. Papers presenting uses of group testing and selection primitives in pattern matching, data structures for static membership, communication protocols, cryptographic protocols, streaming computation, bioinformatics and computational biology, and compressed sensing are welcome, as are papers focusing on theoretical aspects of combinatorial structures for identification and coding, such as randomness extractors, superimposed codes, list-decodable codes, selectors, and the like.

All the above topics are open to both research and industry contributions. Papers reporting on original research unpublished elsewhere are primarily sought. Surveys of important results, especially recent ones, are also invited.
