
Monday, June 09, 2008

CS Community: Wikipedia entry, Two meetings, a talk, a postdoc position and a blog.

Found this on the internets:

  • A not-so-optimal definition of Compressed Sensing on Wikipedia. If you have a better one, please leave it in the comment section of this entry. I would go for something like this:


    Compressed Sensing is a technique for acquiring a signal utilizing the prior knowledge that it is sparse or compressible.


    I could not make it shorter. What would be your definition? Similarly, there is no entry on the subject in Scholarpedia. For those who prefer symbols, a small sketch of the standard formulation follows.
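    To be clear, the following is just my own shorthand for the usual setting in the CS literature, not anything taken from the Wikipedia entry: one acquires m linear measurements of an n-dimensional signal x that is k-sparse (or compressible) in some basis, with m much smaller than n, and recovers it by convex optimization:

        \[ y = \Phi x, \qquad \Phi \in \mathbb{R}^{m \times n}, \qquad m \ll n \]
        \[ \hat{x} = \arg\min_{z \in \mathbb{R}^{n}} \|z\|_{1} \quad \text{subject to} \quad \Phi z = y \]

    The results of Candes, Romberg, Tao and Donoho show that when \Phi behaves like a random matrix (e.g., i.i.d. Gaussian entries), on the order of k log(n/k) measurements suffice to recover a k-sparse x exactly, which is the sense in which one samples well below the Nyquist rate.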

  • Industrial Mathematics Institute Conference at the University of South Carolina

    Theme: Compressed Sensing
    When: Friday, June 16, 2008
    Who:


  • Announcement for DIMACS/DyDAn Working Group on Streaming, Coding, and Compressive Sensing: Unifying Theory and Common Applications to Sparse Signal/Data Analysis and Processing, March 25 - 26, 2009 at the DIMACS/DyDAn Center, CoRE Building, Rutgers University

    Many algorithms and techniques in computer science, optimization, and coding theory exploit data sparsity in order to allow for compact signal/data representation and efficient processing. Because an overwhelming number of signals encountered both in nature and in man-made networks are inherently sparse, such techniques have found many applications in network security, computational imaging, astronomical data analysis, and the life sciences, in particular bioinformatics and neuroscience.

    Sparse data processing has also recently received significant attention with the emergence of the new field of compressive sensing, which is closely tied to coding theory, optimization, statistics, the analysis of streaming algorithms, and random matrix theory. Consequently, fundamental advances in this area can have a broad impact on many scientific and mathematical disciplines. The fundamental problem in compressed sensing is to design the fewest measurements of a signal from which it can be reconstructed efficiently to a desired accuracy; the results obtained so far ultimately require far fewer measurements than the Nyquist sampling rate, which has long been the gold standard. This is an exciting development that has been spawned by new mathematical and algorithmic techniques. There is a growing community of researchers exploring the applications of this breakthrough to various imaging applications in the sciences, analog-to-information conversion in sensor design, and elsewhere. Still, a lot remains to be done in terms of understanding the full power of these techniques, extending the fundamental problem to richer signals, and expanding the range of applications.

    Recent research has shown that compressive sensing, coding theory, and streaming algorithms share the same underlying working principles, and that they can be used to address several modern network applications dealing with massive, distributed stored data and data streams, which call for efficient real-time inference under strict communication, computation, and memory (CCM) constraints. A few examples of such applications include: identifying a small number of "heavy hitter" IPs (those delivering a large amount of traffic) out of the many millions of source IPs sending traffic into a government or corporate network (a toy sketch of one standard approach appears after this announcement); accurately counting the number of distinct objects simultaneously observed by multiple sensors in a large sensor network with minimal communication; and checking consistency and localizing inconsistent records between a master database and its remote copies with billions of records. Exact solutions of many such problems are unattainable because of the CCM constraints.

    We propose a combined activity that will first bring together an interdisciplinary group of researchers in a "working group" and then include them in a larger, more public workshop. We propose to hold 1.5 days of a "working group" on March 25-26, 2009, that will explore the fundamental principles shared by compressive sensing, coding, statistics, and streaming algorithms from the perspective of theoretical computer science. The organizers have unique joint expertise in all these areas and will therefore be able to recruit a diverse group of participants performing research at the intersection of these seemingly disparate research fields. The working group will be followed by 1.5 days of a more public workshop on the same theme, emphasizing recent developments and ending on March 27. These activities will be organized to address past and especially current areas of research, as well as to identify promising areas for future research.
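    Since the announcement mentions the heavy-hitter problem, here is a minimal, illustrative sketch of mine (not part of the announcement; the IPs, table sizes and threshold are made up for the example) of the Count-Min sketch, one standard streaming data structure for that problem: it estimates per-item counts, and hence flags candidate heavy hitters, in memory far smaller than the number of distinct source IPs.

    import hashlib

    class CountMinSketch:
        """Toy Count-Min sketch: approximate per-item counts in sublinear memory.
        Estimates never undercount; they overcount by a small, controllable amount."""

        def __init__(self, width=2000, depth=5):
            self.width = width   # counters per row (roughly e / epsilon)
            self.depth = depth   # independent hash rows (roughly ln(1 / delta))
            self.table = [[0] * width for _ in range(depth)]

        def _buckets(self, item):
            # One salted hash per row; md5 stands in for a pairwise-independent hash.
            for row in range(self.depth):
                digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
                yield row, int(digest, 16) % self.width

        def update(self, item, count=1):
            for row, col in self._buckets(item):
                self.table[row][col] += count

        def estimate(self, item):
            # The least-contaminated counter gives the tightest upper bound on the true count.
            return min(self.table[row][col] for row, col in self._buckets(item))

    # Hypothetical usage: flag source IPs whose estimated traffic exceeds 1% of the stream.
    cms = CountMinSketch()
    stream = ["10.0.0.1"] * 5000 + ["10.0.0.2"] * 30 + ["10.0.0.3"] * 7000
    for ip in stream:
        cms.update(ip)
    threshold = 0.01 * len(stream)
    for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
        if cms.estimate(ip) > threshold:
            print(ip, "looks like a heavy hitter, estimated count", cms.estimate(ip))

    The connection to the theme above: the sketch is itself a set of linear measurements of the (sparse) vector of per-IP counts, which is exactly the kind of common ground between streaming and compressive sensing the working group intends to explore.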


  • A presentation by Yonina Eldar on June 17th at the Technion. She is also looking to hire a postdoc.

  • Andriyan Suksmono from the Institut Teknologi Bandung blogs on Compressed Sensing. The blog, named Chaotic Pearls, has many entries on the subject, several of them with accompanying MATLAB code. It is all in Indonesian. I actually wanted to write a similar entry on Nobel Prizes awarded for imaging, albeit in English.


  • Image Credit: NASA/JPL-Caltech/UMD/GSFC

    2 comments:

    1. Just to lobby (gently) for a third "S", IMHO the notion of "compressibility" naturally associates with "sensing", "sampling", and "simulation".

      Compressive simulation has close affinities with a well-established mathematical/engineering discipline called "model order reduction" (MOR).

      But in recent years, MOR has acquired an (undeserved) reputation for being rather stodgy -- so it is welcome news that the new ideas and mathematical methods of CS promise to enliven MOR just as much as they are enlivening many other informatic disciplines.

      There is also the practical fact that in very many enterprises, the necessities of near-optimal sensing, sampling, and simulation arise concurrently ... so if you learn the tools of any one of the three, you might as well learn something about the other two too.

      Spacecraft design, testing, and operations, for example, is a professional discipline in which all three "S"s are essential.

    2. As a follow-up to the above post, Igor's (excellent) definition then naturally extends to this: "Compressive simulation (CS) is a technique for simulating a system utilizing the prior knowledge that its state-space trajectories are compressible."

      This definition suggests that the questions-and-answers of compressive simulation are largely identical to the questions-and-answers of compressive sensing and sampling.

      So if students learn about any one of compressive sensing, sampling, and simulation, they might as well learn about the other two also.
