
Please join/comment on the Google+ Community (1595), the CompressiveSensing subreddit (965), the Facebook page (83 likes), the LinkedIn Compressive Sensing group (3365) or the Advanced Matrix Factorization Group (1072)

Wednesday, June 09, 2010

CS: LinkedIn Discussions, a Public List of Referers, MMDS 2010, PCMI 2010, BioCS

There are 439 members in the Compressive Sensing LinkedIn group. Who will be the 500th? The suspense continues. In the meantime, some of the discussions are very enlightening, as other folks are doing a fine job of explaining some of the concepts to newcomers. I also learn from them.

On top of the more than 1,000 people getting their news from this blog every day through RSS feeds and e-mail, there are also people coming to the site from other websites. I have set up a public referrer list on the right-hand side of this blog so you can see who is linking to Nuit Blanche. I set it up recently, so the listing is still small. It is here. If you want to appear on this list, you know what to do.

The program for the upcoming MMDS 2010 Workshop on Algorithms for Modern Massive Data Sets at Stanford is now available. Some talks are clearly related to compressive sensing. You have until tomorrow to register.

Sarah at the Big Numbers blog reminded me of this meeting at IAS:

PCMI 2010
June 27 – July 17, 2010
Park City, Utah
Image Processing

Making Mathematical Connections

The Graduate Summer School will feature:

Richard Baraniuk, Rice University
Compressive Sensing: Sparsity-Based Signal Acquisition and Processing
Sensors, imaging systems, and communication networks are under increasing pressure to accommodate ever larger and higher-dimensional data sets; ever faster capture, sampling, and processing rates; ever lower power consumption; communication over ever more difficult channels; and radically new sensing modalities. The foundation of today’s digital data acquisition systems is the Shannon/Nyquist sampling theorem, which asserts that to avoid losing information when digitizing a signal or image, one must sample at least two times faster than the signal’s bandwidth, at the so-called Nyquist rate. Unfortunately, the physical limitations of current sensing systems combined with inherently high Nyquist rates impose a performance brick wall on a large class of important and emerging applications.

This lecture will overview some of the recent progress on compressive sensing, a new approach to data acquisition in which analog signals are digitized not via uniform sampling but via measurements using more general, even random, test functions. In stark contrast with conventional wisdom, the new theory asserts that one can combine “sub-Nyquist-rate sampling” with digital computational power for efficient and accurate signal acquisition. The implications of compressive sensing are promising for many applications and enable the design of new kinds of analog-to-digital converters; radio receivers, communication systems, and networks; cameras and imaging systems, and sensor networks.
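The abstract's central claim, that sub-Nyquist random measurements plus digital computation suffice for accurate acquisition, can be illustrated with a toy sketch (my own, not part of the lecture material): a k-sparse signal is recovered from far fewer random linear measurements than its length by solving the basis pursuit linear program min ||x||_1 subject to Ax = y. The dimensions and the use of SciPy's linear programming solver are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 120, 60, 5          # ambient dimension, measurements (m << n), sparsity

# a k-sparse signal
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

# random Gaussian measurement matrix: "measurements using general, even random, test functions"
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# basis pursuit: min ||x||_1 s.t. Ax = y, posed as an LP via x = u - v, u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x))  # tiny: the sparse signal is recovered from m << n samples
```

With high probability over the random matrix, recovery is exact even though the system Ax = y is badly underdetermined; this is the "sub-Nyquist sampling plus computational power" trade the abstract describes.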

Antonin Chambolle, École Polytechnique
Total-Variation based image reconstruction.
In the introduction we will recall the reasons for which the Total Variation (TV) was introduced as a powerful tool for image recovery. The focus of the first lectures will be mostly on theoretical aspects. The definition and essential properties of the TV will be detailed; variational problems involving the related perimeter functional will also be considered. Then, we will study the “Rudin-Osher-Fatemi” problem (from a convex analysis point of view, the proximal operator associated with the TV). We will try to analyse some interesting properties of the solutions, including regularity issues.

A second part of the lectures will address algorithmic issues and describe the standard and less standard numerical methods for solving efficiently TV-like problems. In a last lecture, we will discuss original extensions which involve TV-like functionals.
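As a small illustration of the Rudin-Osher-Fatemi problem mentioned above, here is a sketch of one standard numerical method, Chambolle's dual projection algorithm, applied to denoising. The regularization weight, image size, and iteration count are my own illustrative choices, not anything prescribed in the lectures.

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary (zero at the last row/column)
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # discrete divergence, the negative adjoint of grad above
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=0.25, n_iter=100, tau=0.125):
    """Chambolle's projection algorithm for the ROF model
    min_u ||u - f||^2 / 2 + lam * TV(u); tau <= 1/8 guarantees convergence."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

# toy usage: a noisy piecewise-constant image, the case TV handles best
rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

The dual iteration never touches u directly; the denoised image is read off at the end as f minus lam times the divergence of the converged dual field.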

Michael Elad, Israel Institute of Technology
Sparse & Redundant Representations – From Theory to Applications in Image Processing
Modeling natural image content is key in image processing. Armed with a proper model, one can handle various tasks such as denoising, restoration, separation, interpolation and extrapolation, compression, sampling, analysis and synthesis, detection, recognition, and more. Indeed, a careful study of the image processing literature reveals that there is an evolution of such models and their use in applications.

This short-course is all about one such model, which I call Sparse-Land for brevity. This specific model is intriguing and fascinating because of the beauty of its theoretical foundations, the superior performance it leads to in various applications, its universality and flexibility in serving various data sources, and its unified view, which makes all the above processing tasks clear and simple. In this course we shall start with the mathematical foundations of this model, and then present several image processing applications, where it is shown to lead to state-of-the-art results.

Anna Gilbert, University of Michigan
A survey of sparse approximation
The past 10 years have seen a confluence of research in sparse approximation amongst computer science, mathematics, and electrical engineering. Sparse approximation encompasses a large number of mathematical, algorithmic, and signal processing problems which all attempt to balance the size of a (linear) representation of data and the fidelity of that representation. I will discuss several of the basic algorithmic problems and their solutions, including connections to streaming algorithms and compressive sensing.
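One of the basic algorithmic problems the survey alludes to, finding the best k-term representation of data in a dictionary, is often attacked greedily. Below is a sketch of Orthogonal Matching Pursuit, a standard greedy algorithm in this literature; the dimensions and the noiseless setup are my own illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A to represent y.
    Assumes the columns of A are unit-norm."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # re-fit by least squares on the whole support so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# toy usage: recover a k-sparse vector from a random dictionary
rng = np.random.default_rng(2)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                       # unit-norm columns
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.uniform(0.5, 2.0, k) * rng.choice([-1.0, 1.0], k)
x_hat = omp(A, A @ x_true, k)
```

The trade-off the abstract describes is visible here: k controls the size of the representation, while the residual norm measures its fidelity.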

The Undergraduate Summer School program will feature:

An Introduction to Compressed Sensing

Jared Tanner, University of Edinburgh

Most of the signals, images, and other information forms observed in nature exhibit an underlying simplicity. Audio signals often follow a “musical score” with relatively few dominant tones at any time, and images are often formed of large smooth regions separated by edges. This simplified structure allows most signals to be compressed efficiently, where a faithful approximation is stored using many fewer units of information. Two familiar examples of this compression are the .mp3 and .jpg formats for audio and images respectively. Despite the ubiquity of compression, we often take great care to acquire high fidelity/resolution representations before compressing them. This striking inefficiency raises the question: can we acquire a compressed representation directly?

Compressed Sensing is a new (appearing in 2004) topic exploring this question, and explaining when and why we are and are not able to sense a compressed representation directly. This course will begin with an introduction to representations in applied and computational harmonic analysis for compression, including Fourier Series, Wavelets, and other time-frequency representations. We will then embark on a tour of selected topics in compressed sensing, studying various algorithms and under what conditions we can guarantee their desired behavior. These topics will be a blend of signal processing, matrix analysis, inverse problems, optimization, and high-dimensional geometry.

Also of interest:

Research Program in Mathematics

“Image Processing”
Complementing the highly structured Graduate Summer School, which is directed at younger mathematicians, the Research Program in Mathematics addresses the needs of mathematicians who are already carrying out research. The program offers advanced scholars the opportunity to do research, collaborate with their peers, meet outstanding students, and explore new teaching ideas with professional educators. It is designed to introduce active areas of research by focusing on a specific topic. The informal format generates lively exchanges of views and information between established and newer researchers.

2010 Research Program in Image Processing

Organizers: Tony F. Chan, University of California, Los Angeles; Ronald A. DeVore, University of South Carolina, Columbia; Stanley Osher, University of California, Los Angeles; Hongkai Zhao, University of California, Irvine

Some lectures on these topics will be accessible to advanced graduate students and postdocs, while others will be intended for more specialized working groups.

A primary goal of the research program is to foster the collaboration of a diverse group of participants. Daily seminars will be held and all Research Program participants have an opportunity to give a seminar if they choose. (The organizers will draw up a schedule in consultation with the participants.) There will be plenty of time for work and informal discussions. A related goal of this program is to highlight the different methods used to address problems in image processing.

New and recent PhD’s are especially encouraged to apply if they are working in the field of image processing.

One of the web crawlers found the following project (we had a Q&A with Esther Rodriguez-Villegas about a compressive sensing EEG a while ago).

BioCS-Node: Enabling Ultra-Low-Power Ambulatory Monitoring of Cardiac and Neurological Bioelectrical Signals Using Compressed Sensing

Project Leader: Pierre Vandergheynst of EPFL/STI/IEL/LTS2

David Atienza of EPFL/STI/IEL/ESL , expert in thermal modeling of multiprocessor architectures and thermal management, hardware/software co-design methods

Our modern society is threatened by an incipient healthcare delivery crisis caused by current demographic and lifestyle trends. On the one hand, the world's population is fast aging, resulting in an increased prevalence of cardiac and neurological disorders. On the other hand, our busy lifestyles leave little time and motivation for fitness, healthy diet management and mental wellness, and are fueling a rise in the number of people unsuspectingly developing or living with chronic cardiovascular and neurological conditions for decades. As a matter of fact, according to the World Health Organization, cardiovascular diseases (CVD) are the number one cause of death worldwide, responsible for an estimated 17.1 million deaths in 2004 (i.e., 29% of all deaths worldwide) and an economic fallout in the billions [1]. Moreover, neurological diseases including stroke, neuromotor ailments and sleep disorders affect up to 1 billion people globally, and are a significant cause of morbidity and mortality (i.e., 12% of all deaths globally) [2]. These increasingly prevalent cardiac and neurological diseases require escalating levels of supervision and medical management, which are contributing to skyrocketing healthcare costs and, more importantly, are unsustainable for traditional healthcare infrastructures. Wireless body sensor network (WBSN) technologies promise to offer large-scale and cost-effective solutions to this problem. Outfitting patients with wearable, miniaturized and wireless sensors able to measure, pre-process and wirelessly report cardiac and neurological signals to telehealth providers would enable the required personalized, long-term and real-time remote monitoring of chronic patients, the seamless integration of that monitoring with the patient's medical record, and its coordination with nursing/medical support.

To successfully deploy WBSNs able to perform long-term, remote and clinically relevant monitoring of chronic patients in free-living conditions, it is critical that sensor devices become vanishingly small and autonomous, while retaining their embedded intelligence and wireless capabilities. Current devices operate on Li-ion batteries that provide about 1 watt-hour of energy, and have been shown to exhibit, for instance, an autonomy of less than a day for single-lead cardiac bioelectrical signal (i.e., electrocardiogram or ECG) sensing and wireless streaming. This strikingly low autonomy figure is due to the transmission of uncompressed ECG data over power-hungry wireless links. The autonomy figures would be even worse for multi-lead ECG and electroencephalogram (EEG) monitoring. Clearly, significant research contributions remain to be made in terms of ultra-low-power embedded compression of ECG and EEG signals and ultra-low-power wireless WBSN connectivity. Within this project, we propose a novel and promising approach to tackle the former challenge. More specifically, we devise low-complexity yet powerful multi-lead cardiac and neurological bioelectrical signal compression techniques and design their supporting ultra-low-power sensor digital processing platform.

Capitalizing on the largely sparse nature of ECG and EEG, we propose to apply the emerging approach to joint sensing and compression for this class of signals, so-called compressed sensing (CS), which promises significant compression ratios while using computationally light linear encoders. This approach is particularly attractive and promising for our target ultra-low-power WBSN-based monitoring systems because the sensor node can very efficiently jointly compress the acquired ECG/EEG signals through a small number of linear, signal-independent measurements while preserving their underlying information; only this small number of measurements will be wirelessly transmitted to the remote telehealth center, where the full multi-lead records can be accurately reconstructed using complex non-linear decoding. More importantly, we propose to design a new sensor embedded platform that effectively implements the compressed sensing of cardiac and neurological bioelectrical signals. If successful, this project could lead to a new way of thinking about and designing wireless sensing platforms, and would be the first to demonstrate the ultra-low-power benefits of compressed sensing for cardiac and neurological bioelectrical signals.

Liked this entry? Subscribe to the Nuit Blanche feed, there's more where that came from.
