
Friday, January 01, 2010

CS: Compressed Sensing Approach for High Throughput Carrier Screen, Compressive Oversampling for Robust Data Transmission in Sensor Networks


After the thank you entry, Yaniv Erlich sent me the following:

...This is the time to thank you for running the blog.
As an outsider (I study genomics), the blog is my only way of keeping track of what is going on in the field.
In addition, Noam Shental and I started a collaboration through reading each other's papers on your blog.
The collaboration has already led to one paper at Allerton 2009, and we are working together on a bigger project now...

You cannot believe how ecstatic I was when I read this, especially given the worthy subject these guys are undertaking (see below). 2010 is already starting nicely. The Allerton paper is (my webcrawler missed it!): Compressed Sensing Approach for High Throughput Carrier Screen by Yaniv Erlich, Noam Shental, Amnon Amir, Or Zuk. The abstract reads:
Carrier screens are widely used in medical genetics to prevent rare genetic disorders. Current detection methods are based on serial processing which is slow and expensive. Here, we discuss a highly efficient compressed sensing approach for ultra-high throughput carrier screens, and highlight both similarities and unique features of our setting compared to the standard compressed sensing framework. Using simulations, we demonstrate the power of compressed carrier screens in a real scenario - finding carriers for rare genetic diseases in Ashkenazi Jews, a population that has well established wide-scale carrier screen programs. We also compare the decoding performance of two typical reconstruction approaches in compressed sensing - GPSR and Belief Propagation. Our results show that Belief Propagation confers better decoding performance in the current application.

The attendant slides are here. Here are some extracts:

Analog parallel processing: here is a good way to summarize the concept


The paws of Dror are all over :-)


These numbers are significant. I wonder how they would shrink further once one takes into account that individuals are genetically related to one another, especially when one chooses to screen an entire population. While the screening itself is very important, I also wonder how this group testing approach could be applied to the discovery process.
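To make the pooling idea concrete, here is a minimal toy sketch of compressed carrier screening: a random pooling matrix mixes individuals into a small number of assays, and a sparse solver flags the few carriers. This is purely illustrative, not the authors' pipeline; plain iterative soft thresholding (ISTA) stands in for GPSR or Belief Propagation, and all sizes are made up.

# Minimal toy sketch of compressed carrier screening (hypothetical numbers).
# NOT the authors' pipeline: plain ISTA stands in for GPSR / Belief Propagation.
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_pools, n_carriers = 1000, 100, 5   # carriers are rare => sparse signal

# x[i] = 1 if individual i is a carrier, 0 otherwise
x = np.zeros(n_individuals)
x[rng.choice(n_individuals, n_carriers, replace=False)] = 1.0

# Each pooled assay mixes a random ~10% subset of the individuals
A = (rng.random((n_pools, n_individuals)) < 0.1).astype(float)
y = A @ x                                            # pooled measurements (noise-free)

# ISTA: minimize 0.5*||y - A z||^2 + lam*||z||_1
lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2
z = np.zeros(n_individuals)
for _ in range(1000):
    z = z - step * (A.T @ (A @ z - y))
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("true carriers:     ", sorted(np.flatnonzero(x)))
print("largest estimates: ", sorted(np.argsort(z)[-n_carriers:]))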


Finally, on a lighter note, I liked the acknowledgment slide, and more importantly, I liked the fact that one of the members of that team is a real marine biologist. For more information on that, you may want to refresh your mind with this Seinfeld video (which can be put in context with this video):




Quite by accident, I also found the following interesting papers: Compressive Oversampling for Robust Data Transmission in Sensor Networks by Zainul Charbiwala, Supriyo Chakraborty, Sadaf Zahedi, Younghun Kim, Mani B. Srivastava, Ting He, Chatschik Bisdikian. The abstract reads:
Data loss in wireless sensing applications is inevitable and while there have been many attempts at coping with this issue, recent developments in the area of Compressive Sensing (CS) provide a new and attractive perspective. Since many physical signals of interest are known to be sparse or compressible, employing CS, not only compresses the data and reduces effective transmission rate, but also improves the robustness of the system to channel erasures. This is possible because reconstruction algorithms for compressively sampled signals are not hampered by the stochastic nature of wireless link disturbances, which has traditionally plagued attempts at proactively handling the effects of these errors. In this paper, we propose that if CS is employed for source compression, then CS can further be exploited as an application layer erasure coding strategy for recovering missing data. We show that CS erasure encoding (CSEC) with random sampling is efficient for handling missing data in erasure channels, paralleling the performance of BCH codes, with the added benefit of graceful degradation of the reconstruction error even when the amount of missing data far exceeds the designed redundancy. Further, since CSEC is equivalent to nominal oversampling in the incoherent measurement basis, it is computationally cheaper than conventional erasure coding. We support our proposal through extensive performance studies.
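If I read the CSEC idea correctly, the sketch below captures its flavor: take a handful of extra random measurements up front, let the channel erase some of them, and reconstruct from whatever survives. This is a hypothetical toy illustration, not the authors' implementation; the solver is a bare-bones orthogonal matching pursuit and every size below is made up.

# Toy sketch of CS erasure coding (CSEC) as I understand it: oversample the
# random measurements, drop some in the channel, decode from the survivors.
import numpy as np

rng = np.random.default_rng(1)

n, k = 256, 8            # signal length and sparsity
m_needed = 64            # measurements a lossless channel would need
redundancy = 16          # extra measurements taken as "erasure coding"

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m_needed + redundancy, n)) / np.sqrt(m_needed)
y = Phi @ x

# Channel erases some measurements at random; keep the survivors only.
lost = rng.choice(len(y), size=20, replace=False)
keep = np.setdiff1d(np.arange(len(y)), lost)
Phi_rx, y_rx = Phi[keep], y[keep]

# Bare-bones orthogonal matching pursuit on the surviving measurements
def omp(A, b, sparsity):
    support, resid = [], b.copy()
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ resid)))       # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        resid = b - A[:, support] @ coef
    xhat = np.zeros(A.shape[1])
    xhat[support] = coef
    return xhat

x_hat = omp(Phi_rx, y_rx, k)
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))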

and Improving Data Integrity with Randomness -- A Compressive Sensing Approach by Zainul Charbiwala, Sadaf Zahedi, Younghun Kim, Supriyo Chakraborty, Chatschik Bisdikian, Mani B Srivastava. The introduction starts with:
Data loss in wireless sensor systems is inevitable, either due to exogenous (such as transmission medium impediments) or endogenous (such as faulty sensors) causes. While there have been many attempts at coping with this issue, recent developments in the area of Compressive Sensing (CS) enable a new perspective. Since many natural signals are compressible, it is possible to employ CS, not only to reduce the effective sampling rate, but to improve the robustness of the system at a given Quality of Information (QoI). This is possible because reconstruction algorithms for compressively sampled signals are not hampered by the stochastic nature of wireless link disturbances and sensor malfunctions, which has traditionally plagued attempts at proactively handling the effects of these errors. In this paper, we show how reconstruction error remains unchanged despite extreme independent data losses by marginally increasing the average sampling rate. We also show that a simple re-ordering of samples prior to communication could enable successful reconstruction when losses display bursty behavior.
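The re-ordering trick mentioned at the end of that introduction can be illustrated in a few lines: a random interleaver, shared by sender and receiver, turns a contiguous burst of losses on the wire into losses scattered across the original measurement sequence, which is the loss pattern the paper argues CS recovery handles well. Again a hypothetical sketch with made-up sizes, not the authors' code.

# Toy sketch of sample re-ordering (interleaving) against bursty losses.
import numpy as np

rng = np.random.default_rng(2)

m = 80                               # number of compressive measurements
perm = rng.permutation(m)            # interleaver shared by sender and receiver

measurements = np.arange(m)          # stand-in for actual measurement values
tx_stream = measurements[perm]       # transmit in permuted order

# Channel drops a contiguous burst of 15 packets on the wire
burst = np.arange(30, 45)
received_idx = np.delete(perm, burst)   # which original measurements survived

# After de-interleaving, the lost measurement indices are spread out
lost_original = np.setdiff1d(np.arange(m), received_idx)
print("burst positions on the wire:        ", burst)
print("lost measurements after re-ordering:", lost_original)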



Credit: NASA, the Sun as seen by the SOHO spacecraft.

1 comment:

  1. Hi Igor,

    Thanks for listing my paper. It's our tiny attempt at contributing to the CS field.

    I'd like to point out something about this paper. I did the completely unthinkable, and *computed* the RIP constant explicitly through a Monte Carlo type simulation. Now, I know that this is technically incorrect, because the RIP is defined over *all* possible sub-matrices, but I found this technique to provide fairly consistent results that match our intuition about the "goodness" of sensing matrices, especially when we're trying to compare one type of sensing system with another.

    I confess that I could have taken an analytical route to illustrate my point, but could you (or other readers) comment on your experience with computing the RIP explicitly?

    Thanks,

    Zainul.

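For readers curious what such a Monte Carlo estimate might look like, here is a minimal sketch (illustrative only, and of course not Zainul's code): sample random k-column submatrices of a column-normalized sensing matrix and track the worst deviation of their squared singular values from one. Since it only samples supports rather than enumerating them all, this can only lower-bound the true restricted isometry constant.

# Monte Carlo lower bound on the restricted isometry constant delta_k.
import numpy as np

def estimate_rip_constant(A, k, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    A = A / np.linalg.norm(A, axis=0)        # unit-norm columns
    n = A.shape[1]
    delta = 0.0
    for _ in range(trials):
        cols = rng.choice(n, size=k, replace=False)
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        # ||A_S v||^2 / ||v||^2 lies in [s_min^2, s_max^2] for this support
        delta = max(delta, abs(s.max() ** 2 - 1), abs(s.min() ** 2 - 1))
    return delta

rng = np.random.default_rng(3)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
print("empirical lower bound on delta_4:", estimate_rip_constant(A, k=4))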