Today we have a really long post, but first I wanted to mention the following:
Dataset B: Coded Exposure Images using Canon Camera and Ferro-Electric Shutter
(Download the complete Matlab code and all four input files)
* Indoor Toy Train (118 pixel blur)
* Outdoor Green Taxi (235 pixel blur)
* Outdoor White Taxi (66 pixel blur)
* Outdoor Car License Plate (60 pixel blur)
Thanks. I agree that Candes is particularly well qualified to make this decision.
-- Cleve

But the newsletter still has the wrong translation, muuuhhh. I like Cleve and his software, but maybe we should call their software Mathlabs or something.
* While we are on the subject of language, I think I should be able to learn Spanish just by reading this introduction to compressed sensing (the whole document is Algoritmos y aplicaciones de Compressive Sensing by María José Díaz Sánchez). Though, I really need to hear someone speak it. It is fascinating how domain expertise really allows one to hop back and forth between languages. Maybe we should have a Rosetta stone explaining Compressed Sensing in many languages, including Indonesian.
* Stephane Chretien updated his page to include a code for his recent paper. From his page:
Sparse recovery with unknown variance: a LASSO-type approach (with Sebastien Darses). PDF. Submitted. An illustration using NESTA v1.1 of Lemma 5.3 about the support of the LASSO solution can be found here. The results can be checked using the modification of S. Becker and J. Bobin's Matlab code available here.
 
The Random Demodulator (RD) and the Modulated Wideband Converter (MWC) are two recently proposed compressed sensing (CS) techniques for the acquisition of continuous-time spectrally-sparse signals. They extend the standard CS paradigm from sampling discrete, finite dimensional signals to sampling continuous and possibly infinite dimensional ones, and thus establish the ability to capture these signals at sub-Nyquist sampling rates. The RD and the MWC have remarkably similar structures (similar block diagrams), but their reconstruction algorithms and signal models strongly differ. To date, few results exist that compare these systems, and owing to the potential impacts they could have on spectral estimation in applications like electromagnetic scanning and cognitive radio, we more fully investigate their relationship in this paper. Specifically, we show that the RD and the MWC are both based on the general concept of random filtering, but that the sampling functions characterising the systems differ significantly. We next demonstrate a previously unreported model sensitivity the MWC has to short duration signals that resembles the known sensitivity the RD has to nonharmonic tones. We also show that block convolution is a fundamental aspect of the MWC, allowing it to successfully sample and reconstruct block-sparse (multiband) signals. This aspect is lacking in the RD, but we use it to propose a new CS based acquisition system for continuous-time signals whose amplitudes are block sparse. The paper includes detailed time and frequency domain analyses of the RD and the MWC that differ, sometimes substantially, from published results.
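Here is a minimal sketch of the random filtering view of the RD (my own toy construction, not the authors' code): chip the Nyquist-rate frame with a random ±1 sequence, then integrate and dump at the sub-Nyquist rate. The frame length N, measurement count M and sparsity K are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 32                      # Nyquist-rate frame length, sub-Nyquist measurements
R = N // M                          # integrate-and-dump length

# K-sparse harmonic signal: x = F a with a sparse in frequency
K = 3
a = np.zeros(N, dtype=complex)
a[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
F = np.fft.ifft(np.eye(N)) * N      # (unnormalized) inverse DFT synthesis matrix
x = F @ a

d = rng.choice([-1.0, 1.0], N)      # random chipping sequence at the Nyquist rate
H = np.kron(np.eye(M), np.ones(R))  # integrate-and-dump: sums blocks of R samples
Phi = H @ np.diag(d)                # the overall RD sampling operator (a random filter)
y = Phi @ x                         # M sub-Nyquist measurements of the N-sample frame
print(Phi.shape, y.shape)           # (32, 256) (32,)
```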
In this paper, we consider power spectral density estimation of bandlimited, wide-sense stationary signals from sub-Nyquist sampled data. This problem has recently received attention from within the emerging field of cognitive radio, for example, and solutions have been proposed that use ideas from compressed sensing and the theory of digital alias-free signal processing. Here we develop a compressed sensing based technique that employs multi-coset sampling and produces multi-resolution power spectral estimates at arbitrarily low average sampling rates. The technique applies to spectrally sparse and nonsparse signals alike, but we show that when the wide-sense stationary signal is spectrally sparse, compressed sensing is able to enhance the estimator. The estimator does not require signal reconstruction and can be directly obtained from a straightforward application of nonnegative least squares.
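Since the estimator reduces to nonnegative least squares, here is a toy stand-in (my own construction, much simpler than the paper's multi-coset pipeline) that fits a nonnegative PSD on a frequency grid to a few estimated autocorrelation lags with scipy's nnls; the signal, grid and lag count are all made up for illustration:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, lags = 4096, 24
# wide-sense stationary test signal: two tones buried in white noise
t = np.arange(n)
x = np.cos(2*np.pi*0.125*t) + 0.5*np.cos(2*np.pi*0.3125*t) + 0.1*rng.standard_normal(n)

# estimate the autocorrelation at a handful of lags
r = np.array([np.mean(x[:n-k] * x[k:]) for k in range(lags)])

# linear model r[k] = sum_j S_j cos(2 pi f_j k) with a nonnegative PSD S on a grid
f = np.arange(129) / 256.0            # frequency grid on [0, 1/2]
A = np.cos(2*np.pi*np.outer(np.arange(lags), f))
S, _ = nnls(A, r)

print(np.sort(f[S.argsort()[-2:]]))   # the two largest bins should sit near 0.125 and 0.3125
```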
Sparse Interactions: Identifying High-Dimensional Multilinear Systems via Compressed Sensing by Bobak Nazer and Robert D. Nowak. The abstract reads:
This paper  investigates the problem of identifying sparse multilinear systems. Such  systems are characterized by multiplicative interactions between the  input variables with sparsity meaning that relatively few of all  conceivable interactions are present. This problem is motivated by the  study of interactions among genes and proteins in living cells. The goal  is to develop a sampling/sensing scheme to identify sparse multilinear  systems using as few measurements as possible. We derive bounds on the  number of measurements required for perfect reconstruction as a function  of the sparsity level. Our results extend the notion of compressed  sensing from the traditional notion of (linear) sparsity to more refined  notions of sparsity encountered in nonlinear systems. In contrast to  the linear sparsity models, in the multilinear case the pattern of  sparsity may play a role in the sensing requirements. 
Structured sublinear compressive sensing via dense belief propagation by Wei Dai, Olgica Milenkovic, and Hoa Vin Pham. The abstract reads:
Compressive  sensing (CS) is a sampling technique designed for reducing the  complexity of sparse data acquisition. One of the major obstacles for  practical deployment of CS techniques is the signal reconstruction time  and the high storage cost of random sensing matrices. We propose a new  structured compressive sensing scheme, based on codes of graphs, that  allows for a joint design of structured sensing matrices and  logarithmic-complexity reconstruction algorithms. The compressive  sensing matrices can be shown to offer asymptotically optimal  performance when used in combination with Orthogonal Matching Pursuit  (OMP) methods. For more elaborate greedy reconstruction schemes, we  propose a new family of dense list decoding belief propagation  algorithms, as well as reinforced- and multiple-basis belief propagation  algorithms. Our simulation results indicate that reinforced BP CS  schemes offer very good complexity-performance tradeoffs for very sparse  signal vectors. 
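As a reference point, here is what the OMP baseline mentioned in the abstract looks like in a few lines of Python; the dense Gaussian matrix below is only a stand-in for the structured, code-based matrices the paper actually proposes:

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from y = A x (no noise handling)."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef           # re-fit y on the current support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.allclose(omp(A, A @ x0, k), x0, atol=1e-6))  # typically True at this m, n, k
```

Because the residual is re-fit by least squares at every step, it stays orthogonal to the selected columns, so OMP never picks the same index twice.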
This paper presents an average case denoising performance analysis for the SP, CoSaMP and IHT algorithms. This analysis considers the recovery of a noisy signal, with the assumptions that it is corrupted by an additive random white Gaussian noise and has a K-sparse representation with respect to a known dictionary D. The proposed analysis is based on the RIP, establishing a near-oracle performance guarantee for each of these algorithms. Similar RIP-based analysis was carried out previously for the Dantzig Selector (DS) and Basis Pursuit (BP). Past work also considered a mutual-coherence-based analysis of OMP and thresholding. The performance guarantees developed in this work resemble those obtained for the relaxation-based methods (DS and BP), suggesting that the performance is independent of the contrast and magnitude of the sparse representation entries, which is not true for OMP and thresholding.
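For readers who have not seen it, IHT, one of the three algorithms covered by this analysis, fits in a few lines; the step size and iteration count below are my own choices and are not part of the paper's analysis:

```python
import numpy as np

def iht(A, y, k, step, iters=300):
    """Iterative Hard Thresholding: gradient step, then keep the k largest entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))   # gradient step on 0.5*||y - Ax||^2
        small = np.argsort(np.abs(x))[:-k]   # indices of all but the k largest magnitudes
        x[small] = 0.0                       # hard threshold back to k-sparse
    return x

rng = np.random.default_rng(3)
m, n, k = 100, 256, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = iht(A, A @ x0, k, step=1.0 / np.linalg.norm(A, 2)**2)
print(np.linalg.norm(x_hat - x0))            # typically near 0 at this sparsity
```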
Sparse Recovery, Kashin Decomposition and Conic Programming by Alexandre d'Aspremont. The abstract reads:
We  produce relaxation bounds on the diameter of arbitrary sections of the  l1 ball in R^n. We use these results to test conditions for sparse  recovery. 
CSSF MIMO RADAR: Low-Complexity Compressive Sensing Based MIMO Radar That Uses Step Frequency by Yao Yu, Athina P. Petropulu, and H. Vincent Poor. The abstract reads:
A new approach is proposed, namely CSSF  MIMO radar, which applies the technique of step frequency (SF) to  compressive sensing (CS) based multi-input multi-output (MIMO) radar.  The proposed approach enables high resolution range, angle and Doppler  estimation, while transmitting narrowband pulses. The problem of joint  angle-Doppler-range estimation is first formulated to fit the CS  framework, i.e., as an L1 optimization problem. Direct solution of this  problem entails high complexity as it employs a basis matrix whose  construction requires discretization of the angle-Doppler-range space.  Since high resolution requires fine space discretization, the complexity  of joint range, angle and Doppler estimation can be prohibitively high.  For the case of slowly moving targets, a technique is proposed that  achieves significant complexity reduction by successively estimating  angle-range and Doppler in a decoupled fashion and by employing initial  estimates obtained via matched filtering to further reduce the space  that needs to be digitized. Numerical results show that the combination  of CS and SF results in a MIMO radar system that has superior resolution  and requires far less data as compared to a system that uses a matched  filter with SF. 
A Novel Approach for Fast Detection of Multiple Change Points in Linear Models by Xiaoping Shi, Yuehua Wu, and Baisuo Jin. The abstract reads:
A change point problem occurs in many statistical applications. If there exist change points in a model, it is harmful to make a statistical analysis without any consideration of the existence of the change points, and the results derived from such an analysis may be misleading. There is a rich literature on change point detection. Although many methods have been proposed for detecting multiple change points, using these methods to find multiple change points in a large sample seems infeasible. In this article, a connection between multiple change point detection and variable selection through a proper segmentation of the data sequence is established, and a novel approach is proposed to tackle the multiple change point detection problem via the following two key steps: (1) apply the recent advances in consistent variable selection methods such as SCAD, adaptive LASSO and MCP to detect change points; (2) employ a refinement procedure to improve the accuracy of the change point estimation. Five algorithms are hence proposed, which can detect change points in much less time and with more accuracy than those in the literature. In addition, an optimal segmentation algorithm based on residual sum of squares is given. Our simulation study shows that the proposed algorithms are computationally efficient with improved change point estimation accuracy. The new approach is readily generalized to detect multiple change points in other models such as generalized linear models and nonparametric models.
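The connection to variable selection is easy to demo: write a piecewise-constant mean as a sparse combination of step functions and let a sparsity-promoting regression pick the steps. The sketch below uses a plain LASSO as a crude stand-in for the SCAD/adaptive LASSO/MCP penalties of the paper (step 1 only, no refinement), on made-up data:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n = 300
mu = np.concatenate([np.zeros(100), 2.0*np.ones(120), -1.0*np.ones(80)])
y = mu + 0.3 * rng.standard_normal(n)         # mean shifts at t = 100 and t = 220

# column j is a step starting at time j+1; a nonzero coefficient marks a change point
X = np.tril(np.ones((n, n)))[:, 1:]           # drop column 0 (absorbed by the intercept)
beta = Lasso(alpha=0.05, max_iter=50000).fit(X, y).coef_

print(np.nonzero(np.abs(beta) > 0.5)[0] + 1)  # expect indices near 100 and 220
```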
Peak Reduction and Clipping Mitigation by Compressive Sensing by Ebrahim B. Al-Safadi and Tareq Y. Al-Naffouri. The abstract reads:
  This work establishes the design, analysis, and fine-tuning of a Peak-to-Average-Power-Ratio (PAPR) reducing system, based on compressed sensing at the receiver of a peak-reducing sparse clipper applied to an OFDM signal at the transmitter. By exploiting the sparsity of the OFDM signal in the time domain relative to a pre-defined clipping threshold, the method depends on partially observing the frequency content of extremely simple sparse clippers to recover the locations, magnitudes, and phases of the clipped coefficients of the peak-reduced signal. We claim that in the absence of optimization algorithms at the transmitter that confine the frequency support of clippers to a predefined set of reserved-tones, no other tone-reservation method can reliably recover the original OFDM signal with such low complexity.
Afterwards we focus on designing different clipping signals that can embed a priori information regarding the support and phase of the peak-reducing signal to the receiver, followed by modified compressive sensing techniques for enhanced recovery. This includes data-based weighted $\ell_1$ minimization for enhanced support recovery and phase-augmentation for homogeneous clippers followed by Bayesian techniques.
    We show that using such techniques for a typical OFDM signal of 256 subcarriers and 20% reserved tones, the PAPR can be reduced by approximately 4.5 dB with a significant increase in capacity compared to a system which uses all its tones for data transmission and clips to such levels. The design is hence appealing from both capacity and PAPR reduction aspects. 
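The premise is easy to sanity-check numerically: the peak-reducing signal (clipped minus original) is sparse in time. A toy check, with the 256 subcarriers matching the paper's example but the clipping threshold picked arbitrarily by me:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 256                                    # subcarriers, as in the paper's example
X = (rng.standard_normal(N) + 1j*rng.standard_normal(N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)            # unit-power OFDM time-domain symbol

thr = 2.0                                  # envelope clipping threshold (arbitrary here)
mag = np.abs(x)
x_clip = np.where(mag > thr, thr * x / mag, x)
c = x_clip - x                             # the peak-reducing "clipper" signal

papr = lambda s: 10*np.log10(np.max(np.abs(s)**2) / np.mean(np.abs(s)**2))
print(np.count_nonzero(c), "of", N, "samples clipped")   # c is sparse in time
print(f"PAPR {papr(x):.1f} dB -> {papr(x_clip):.1f} dB")
```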
Remarks on the Restricted Isometry Property in Orthogonal Matching Pursuit algorithm by Qun Mo and Yi Shen. The abstract reads:
  This paper demonstrates theoretically that if the restricted isometry constant $\delta_K$ of the compressed sensing matrix satisfies $$ \delta_{K+1} < \frac{1}{\sqrt{K}+1}, $$ then a greedy algorithm called Orthogonal Matching Pursuit (OMP) can recover a signal with $K$ nonzero entries in $K$ iterations. In contrast, matrices are also constructed with restricted isometry constant $$ \delta_{K+1} = \frac{1}{\sqrt{K}} $$ such that OMP can not recover $K$-sparse $x$ in $K$ iterations. This result shows that the conjecture given by Dai and Milenkovic is ture.
 I am sure they meant True, not ture.
Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements by Vijayaraghavan Thirumalai and Pascal Frossard. The abstract reads:
The distributed representation of correlated images is an important challenge in applications such as multi-view imaging in camera networks or low complexity video coding. This paper addresses the problem of distributed coding of images whose correlation is driven by the motion of objects or the positioning of the vision sensors. It concentrates on the problem where images are encoded with compressed linear measurements, which are used for estimation of the correlation between images at the decoder. We propose a geometry-based correlation model in order to describe the correlation between pairs of images. We assume that the constitutive components of natural images can be captured by visual features that undergo local transformations (e.g., translation) in different images. These prominent visual features are estimated with a sparse approximation of a reference image by a dictionary of geometric basis functions. The corresponding features in the other images are then identified from the compressed measurements. The correlation model is given by the relative geometric transformations between corresponding features. We thus formulate a regularized optimization problem for the estimation of correspondences where the local transformations between images form a consistent motion or disparity map. Then, we propose an efficient joint reconstruction algorithm that decodes the compressed images such that they stay consistent with the quantized measurements and the correlation model. Experimental results show that the proposed algorithm effectively estimates the correlation between images in video sequences or multi-view data. In addition, the proposed reconstruction strategy provides effective decoding performance that compares advantageously to distributed coding schemes based on disparity or motion learning and to independent coding solutions based on JPEG-2000.
by Matthieu Guerquin-Kern, J.-C. Baritaux, and Michael Unser. The abstract reads:
Most  bioimaging modalities rely on indirect measurements of the quantity  under investigation. The image is obtained as the result of an  optimization problem involving a physical model of the measurement  system. Due to the ill-posedness of the above problem, the impact of the  noise on the reconstructed images must be controlled. The recent  emphasis in biomedical image reconstruction is on regularization schemes  that favor sparse solutions, which renders the optimization problem  non-smooth. In this work, we show how step-size adaptation can be used  to speed up the most recent multi-step algorithms (e.g. FISTA) employed  in sparse image recovery. We present experiments in MRI and Fluorescence  Molecular Tomography with specifically tailored step-adaptation  strategies. Our results demonstrate the possibility of an  order-of-magnitude speed enhancement over state-of-the-art algorithms. 
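For context, here is what generic step-size adaptation looks like in FISTA, namely plain Beck-Teboulle backtracking for $\ell_1$-regularized least squares; this is only a baseline stand-in, not the tailored strategies the paper develops for MRI and fluorescence tomography:

```python
import numpy as np

def soft(v, t):                             # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, iters=300, L=1.0, eta=2.0):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1, with backtracking on L."""
    x = z = np.zeros(A.shape[1])
    s = 1.0
    f = lambda v: 0.5 * np.sum((A @ v - y)**2)
    for _ in range(iters):
        g = A.T @ (A @ z - y)               # gradient of the smooth part at z
        while True:                         # backtracking: grow L until the local
            x_new = soft(z - g / L, lam / L)  # quadratic model upper-bounds f
            d = x_new - z
            if f(x_new) <= f(z) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        s_new = (1.0 + np.sqrt(1.0 + 4.0*s*s)) / 2.0
        z = x_new + ((s - 1.0) / s_new) * (x_new - x)   # momentum (extrapolation) step
        x, s = x_new, s_new
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x0 = np.zeros(200); x0[:5] = 1.0
print(np.round(fista(A, A @ x0, lam=0.01)[:6], 2))      # roughly [1 1 1 1 1 0]
```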
In this paper we introduce a nonuniform sparsity model and analyze the performance of an optimized weighted $\ell_1$ minimization over that sparsity model. In particular, we focus on a model where the entries of the unknown vector fall into two sets, with entries of each set having a specific probability of being nonzero. We propose a weighted $\ell_1$ minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero) so that when i.i.d. random Gaussian measurement matrices are used, the weighted $\ell_1$ minimization recovers a randomly selected signal drawn from the considered sparsity model with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We demonstrate through rigorous analysis and simulations that for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the weighted $\ell_1$ minimization outperforms the regular $\ell_1$ minimization substantially. We also generalize our results to signal vectors with an arbitrary number of subclasses for sparsity.
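The weighted $\ell_1$ decoder itself is just a linear program. Here is a small sketch where the weights are picked by hand (the whole point of the paper is how to choose them optimally, which this toy does not attempt):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    """min sum_i w_i |x_i| s.t. Ax = y, as an LP in the split x = xp - xm."""
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]),            # objective on (xp, xm)
                  A_eq=np.hstack([A, -A]), b_eq=y,   # A (xp - xm) = y
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(7)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
# entries in the first half are far more likely to be nonzero...
x0 = np.zeros(n)
x0[rng.choice(50, 8, replace=False)] = rng.standard_normal(8)
# ...so the (mostly zero) second half gets a larger weight
w = np.concatenate([np.ones(50), 3.0 * np.ones(50)])
print(np.linalg.norm(weighted_l1(A, A @ x0, w) - x0) < 1e-6)   # typically True
```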
Penalized  least squares regression is often used for signal denoising and inverse  problems, and is commonly interpreted in a Bayesian framework as a  Maximum A Posteriori (MAP) estimator, the penalty function being the  negative logarithm of the prior. For example, the widely used quadratic  program (with an $\ell^1$ penalty) associated to the LASSO / Basis  Pursuit Denoising is very often considered as MAP estimation under a  Laplacian prior in the context of additive white Gaussian noise (AWGN)  reduction. This paper highlights the fact that, while this is {\em one}  possible Bayesian interpretation, there can be other equally acceptable  Bayesian interpretations. Therefore, solving a penalized least squares  regression problem with penalty $\phi(x)$ need not be interpreted as  assuming a prior $C\cdot \exp(-\phi(x))$ and using the MAP estimator. In  particular, it is shown that for {\em any} prior $P_X$, the minimum  mean square error (MMSE) estimator is the solution of a penalized least  square problem with some penalty $\phi(x)$, which can be interpreted as  the MAP estimator with the prior $C \cdot \exp(-\phi(x))$. Vice-versa,  for {\em certain} penalties $\phi(x)$, the solution of the penalized  least squares problem is indeed the MMSE estimator, with a certain prior  $P_X$. In general $dP_X(x) \neq C \cdot \exp(-\phi(x))dx$.
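Just to make the MAP reading concrete (a standard scalar example, not taken from the paper): with AWGN and a Laplacian prior, i.e. $\phi(x) = \lambda |x|$, the MAP estimate is the soft threshold $$ \hat{x}_{\rm MAP}(y) = \arg\min_x \tfrac{1}{2}(y-x)^2 + \lambda |x| = {\rm sign}(y)\,\max(|y| - \lambda, 0), $$ which vanishes on the whole interval $|y| \le \lambda$. The MMSE estimate $\mathbb{E}[X|Y=y]$ under that same Laplacian prior vanishes only at $y = 0$, so the two estimators cannot coincide; the point of the paper is that this MMSE estimator is nevertheless the exact MAP estimator for some other penalty $\phi(x)$, i.e. some other prior $C \cdot \exp(-\phi(x))$.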
by Gitta Kutyniok. The abstract reads:
Anisotropic decompositions using representation systems such as curvelets, contourlets, or shearlets have recently attracted significantly increased attention due to the fact that they were shown to provide optimally sparse approximations of functions exhibiting singularities on lower dimensional embedded manifolds. The literature now contains various direct proofs of this fact and of related sparse approximation results. However, it seems quite cumbersome to prove such a canon of results for each system separately, while many of the systems exhibit certain similarities. In this paper, with the introduction of the concept of sparsity equivalence, we aim to provide a framework which allows categorization of the ability for sparse approximations of representation systems. This framework, in particular, enables transferring results on sparse approximations from one system to another. We demonstrate this concept for the example of curvelets and shearlets, and discuss how this viewpoint immediately leads to novel results for both systems.
Behind a paywall:
Automatic Container Code Recognition Using Compressed Sensing Method by Chien-Cheng Tseng and Su-Ling Lee. The abstract reads:
In this paper, an  automatic container  code recognition method is presented by using compressed sensing (CS).  First, the compressed sensing approach which uses the constrained L1  minimization method is reviewed. Then, a general pattern recognition  framework based on CS theory is described. Next, the CS recognition  method is applied to construct an automatic container code recognition  system. Finally, the real-life images provided by trading port of  Kaohsiung are used to evaluate the performance of the proposed method. 
Here is a list of papers that caught my attention as they were directly or indirectly related to compressive sensing:
Computational Cameras: Approaches, Benefits and Limits by Shree K. Nayar. The abstract reads:
A  computational camera uses a combination of optics and software to  produce images that cannot be taken with traditional cameras. In the  last decade, computational imaging has emerged as a vibrant field of  research. A wide variety of computational cameras have been demonstrated  - some designed to achieve new imaging functionalities, and others to  reduce the complexity of traditional imaging. In this article, we  describe how computational cameras have evolved and present a taxonomy  for the technical approaches they use. We explore the benefits and  limits of computational imaging, and discuss how it is related to the  adjacent and overlapping fields of digital imaging, computational  photography and computational image sensors. 
We  consider a generalization of the classical group testing problem. Let us  be given a sample contaminated with a chemical substance. We want to  estimate the unknown concentration c of this substance in the sample.  There is a threshold indicator which can detect whether the  concentration is at least a known threshold. We consider both the case  when the threshold indicator does not affect the tested units and the  more difficult case when the threshold indicator destroys the tested  units. For both cases, we present a family of efficient algorithms each  of which achieves a good approximation of c using a small number of  tests and of auxiliary resources. Each member of the family provides a  different tradeoff between the number of tests and the use of other  resources involved by the algorithm. Previously known algorithms for  this problem use more tests than most of our algorithms do.
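The non-destructive case is easy to caricature: with a threshold-$\theta$ indicator and a known upper bound on the concentration, bisection on the dilution factor already gives a good approximation. A toy sketch (mine; the paper's algorithms trade tests against auxiliary resources far more carefully):

```python
def estimate_concentration(indicator, theta, c_max, tol=1e-3):
    """Bisection on the dilution factor; indicator(d) is True iff c / d >= theta."""
    lo, hi = 0.0, c_max              # the unknown c is assumed to lie in [0, c_max]
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if indicator(mid / theta):   # one threshold test answers "is c >= mid ?"
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

c_true, theta, tests = 7.3, 1.0, [0]
def indicator(d):
    tests[0] += 1
    return c_true / d >= theta       # the threshold device, simulated

print(round(estimate_concentration(indicator, theta, c_max=100.0), 3), tests[0], "tests")
```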
by Huilan Chang, Hong-Bin Chen, and Hung-Lin Fu. The abstract reads:
Pooling designs are common tools to efficiently distinguish positive clones from negative clones in clone library screening. In some applications, there is a third type of clones called "inhibitors" whose effect is in a sense to obscure the positive clones in pools. Various inhibitor models have been proposed in the literature. We address the inhibitor problems of designing efficient nonadaptive procedures for both identification and classification problems, and improve previous results in three aspects:
* The algorithm that is used to identify  the positive clones works on a more general inhibitor model and has a  polynomial-time decoding procedure that recovers the set of positives  from the knowledge of the outcomes.
* The algorithm that is used to classify all clones works  in one-stage, i.e., all tests are arranged in advance without knowing  the outcomes of other tests, along with a polynomial-time decoding  procedure.
* We extend our results to pooling designs on complexes where the property to be screened is defined on subsets of biological objects, instead of on individual ones.
Group testing is a search paradigm where one is given a population $O$ of $n$ elements and an unknown subset $P \subseteq O$ of defective elements, and the goal is to determine $P$ by performing tests on subsets of $O$. In classical group testing a test on a subset $Q \subseteq O$ receives a YES response if $Q \cap P \neq \emptyset$, and a NO response otherwise. In group testing with inhibitors (GTI), identifying the defective items is more difficult due to the presence of elements called inhibitors that interfere with the queries so that the answer to a query is YES if and only if the queried group contains at least one defective item and no inhibitor. In the present article, we consider a new generalization of the GTI model in which there are two unknown thresholds h and g and the response to a test is YES both in the case when the queried subset contains at least one defective item and less than h inhibitors, and in the case when the queried subset contains at least g defective items. Moreover, our search model assumes that no knowledge on the number $|P|$ of defective items is given. We derive lower bounds on the minimum number of tests required to determine the defective items under this model and present an algorithm that uses an almost optimal number of tests.
This paper studies the "explanation  problem" for tree- and linearly-ordered array data, a problem motivated  by database applications and recently solved for the one-dimensional  tree-ordered case. In this paper, one is given a matrix A whose rows and  columns have semantics: special subsets of the rows and special subsets  of the columns are meaningful, others are not. A submatrix in A is said  to be meaningful if and only if it is the cross product of a meaningful  row subset and a meaningful column subset, in which case we call it an  "allowed rectangle." The goal is to "explain" A as a sparse sum of  weighted allowed rectangles. Specifically, we wish to find as few  weighted allowed rectangles as possible such that, for all i,j, a_{ij}  equals the sum of the weights of all rectangles which include cell  (i,j).
In this paper we  consider the natural cases in which the matrix dimensions are  tree-ordered or linearly-ordered. In the tree-ordered case, we are given  a rooted tree T1 whose leaves are the rows of A and another, T2, whose  leaves are the columns. Nodes of the trees correspond in an obvious way  to the sets of their leaf descendants. In the linearly-ordered case, a  set of rows or columns is meaningful if and only if it is contiguous.
For tree-ordered data, we prove the  explanation problem NP-Hard and give a randomized 2-approximation  algorithm for it. For linearly-ordered data, we prove the explanation  problem NP-Hard and give a 2.56-approximation algorithm. To our  knowledge, these are the first results for the problem of sparsely and  exactly representing matrices by weighted rectangles. 
Asynchronous Code-Division Random Access Using Convex Optimization by Lorne Applebaum, Waheed U. Bajwa, Marco F. Duarte, and Robert Calderbank. The abstract reads:
Many applications in cellular systems  and sensor networks involve a random subset of a large number of users  asynchronously reporting activity to a base station. This paper examines  the problem of multiuser detection (MUD) in random access channels for  such applications. Traditional orthogonal signaling ignores the random  nature of user activity in this problem and limits the total number of  users to be on the order of the number of signal space dimensions.  Contention-based schemes, on the other hand, suffer from delays caused  by colliding transmissions and the hidden node problem. In contrast,  this paper presents a novel asynchronous (non-orthogonal) code-division  random access scheme along with a convex optimization-based MUD  algorithm that overcomes the issues associated with orthogonal signaling  and contention-based methods. Two key distinguishing features of the  proposed algorithm are that it does not require knowledge of the delay  or channel state information of every user and it has polynomial-time  computational complexity. The main analytical contribution of this paper  is the relationship between the performance of the proposed MUD  algorithm and two simple metrics of the set of user codewords. The study  of these metrics is then focused on two specific sets of codewords,  random binary codewords and specially constructed algebraic codewords,  for asynchronous random access. The ensuing analysis confirms that the  proposed scheme together with either of these two codeword sets  significantly outperforms the orthogonal signaling-based random access  in terms of the total number of users in the system.
Following are three announcements for three talks; some have passed, but what is important is the abstracts of the talks:
iCME Colloquium, Resolution Analysis for Compressive Imaging
Information
* Wednesday, March 2, 2011
* 4:15 pm - 5:05 pm
* Bldg. 380, Rm. 380/380X
Albert Fannjiang (University of California, Davis)
In this talk we will focus on the issue of resolution when compressed sensing techniques are applied to imaging and inverse problems. The physical process of measurement consists of illumination/emission, wave propagation and reception/sampling, giving rise to measurement matrices more structured than those typically studied in the compressed sensing literature. We will show how to achieve the diffraction limit in diffraction tomography, inverse scattering and radio interferometry by using random sampling. We will also discuss the superresolution effect resulting from near-field measurements, random illumination and multiple measurements in conjunction with random sampling.
Fast dimensionality reduction: improved bounds and implications for compressed sensing
Colloquium | January 20, 2011 | 4:10-5 p.m. | 60 Evans Hall
Speaker: Rachel Ward, Courant Institute, New York University. Sponsor: Department of Mathematics
Embedding high-dimensional data sets into subspaces of much lower dimension is important for reducing storage cost and speeding up computation in several applications, including numerical linear algebra, manifold learning, and computer science. The relatively new field of compressed sensing is based on the observation that if the high-dimensional data are sparse in a known basis, they can be embedded into a lower-dimensional space in a manner that permits their efficient recovery through l1-minimization. We first give a brief overview of compressed sensing, and discuss how certain statistical procedures like cross validation can be naturally incorporated into this set-up. The latter part of the talk will focus on a near equivalence of two fundamental concepts in compressed sensing: the Restricted Isometry Property and the Johnson-Lindenstrauss Lemma; as a consequence of this result, we can improve on the best-known bounds for dimensionality reduction using structured, or fast linear embeddings. Finally, we discuss the Restricted Isometry Property for structured measurement matrices formed by subsampling orthonormal polynomial systems, and high-dimensional function approximation from a few samples.
Event Contact: 510-642-6550
University of California, Berkeley, 110 Sproul Hall, Berkeley, United States
Speaker: Qiyu Sun, Approximation Property in Compressive Sampling
Wednesday, January 19, 2011
2:00 PM to 4:00 PM
3076 Duncan Hall
Rice University
6100 Main St
Houston, Texas, USA
The null space property and the restricted isometry property for a measurement matrix are two basic properties in compressive sampling, and are closely related to the sparse approximation. In this talk, we introduce the sparse approximation property for a measurement matrix, a weaker version of the restricted isometry property and a stronger version of the null space property. We show that the sparse approximation property for a measurement matrix could be an appropriate condition to consider stable recovery of any compressible signal from its noisy measurements.
Finally, some summer jobs:
Internship Opportunity (2011 summer) at Mitsubishi Electric Research Labs (MERL), Cambridge, MA, USA (http://www.merl.com/)
Posting Date: Jan 23, 2011
Internship Positions: Multiple
MERL is the North American corporate research and development organization of the Mitsubishi Electric Corporation (MELCO). Based in Cambridge, MA, MERL offers a challenging and stimulating research environment and the opportunity to pursue independent projects and receive authorship on publications. The imaging group at MERL does research concerning information capture/extraction/analysis and display, including computer vision, computational photography, multi-projector systems, image processing, machine learning and 3D graphics & modeling.
MERL invites applications for summer internships in 2011 in the imaging group. Graduate students pursuing a PhD degree in EE, CS or a related discipline are encouraged to apply. Publications at the flagship conferences (ICCV/CVPR/SIGGRAPH) as a result of the internship are highly encouraged. Development will be a mix of prototyping in Matlab and/or C/C++, and strong programming skills are required.
Specific research topics of interest include sparse/dense 3D reconstruction, stereo and multi-view stereo, SLAM, structured light scanning, wide-angle imaging using catadioptric sensors, and 1D signal processing on DSPs. Applicants are expected to be familiar with the fundamentals of computer vision and image/video processing. Prior publication in these areas is a plus.
Please submit resumes to Amit Agrawal. For more information about past projects, please visit  www.amitkagrawal.com