It looks like the blog's stats are down this week; it may be a combination of workshop attendance in faraway places and/or Spring Break.
In yesterday's entry, a commenter pointed out that one of the arXiv papers seemed to lack some references to prior work. As a general rule, if you send me fair comments like this, I can forward them to the authors of the papers while keeping your name anonymous.
Angshul Majumdar asked the following question:
I do not really understand how Distributed Sensor Networks benefit from CS. In general, the main constraint in DSN is the battery life of the sensors, and the battery is consumed by communications. Therefore the final aim is to reduce the communication cost of each sensor. Now, in general, if there are n sensors, each one will send one piece of data to the fusion center. But with CS techniques, each sensor needs to multiply its value by a k-dimensional vector and send out k pieces of data. What CS argues is that all the sensors need to send out k pieces of data, that these are added in the air by the superposition of radio waves, and that the fusion center decodes the n-dimensional vector from the k-dimensional vector. The problem is that each sensor is sending out k pieces instead of one. Therefore it is likely that the battery burns out faster. Maybe I haven't understood the problem. Can you explain? Or who is the right person to enquire about this?
Anybody care to answer Angshul?
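While waiting for answers, here is a minimal numerical sketch of the measurement model described in the question (my own illustration, not taken from any paper; all names and parameter values are hypothetical): each of n sensors multiplies its scalar reading by its own k random coefficients, the k sums superpose over the air, and the fusion center recovers the sparse n-vector from only k aggregate values using a greedy solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, s = 100, 20, 3   # n sensors, k aggregate measurements, s active sensors

# Sparse field: only s of the n sensors see a nonzero value.
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)

# Sensor i holds its own k random coefficients (column i of Phi) and
# transmits its reading multiplied by each of them; the k sums superpose
# over the air, so the fusion center receives y = Phi @ x (k numbers total).
Phi = rng.normal(size=(k, n)) / np.sqrt(k)
y = Phi @ x

def omp(Phi, y, s):
    """Orthogonal matching pursuit: greedy recovery of an s-sparse vector."""
    residual, support = y.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, s)
print(np.linalg.norm(x_hat - x))  # recovery error; small with high probability
```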
Gabriel Peyre let me know that he will be presenting a seminar on Wednesday, April 1st, at IHP in Paris. His talk is entitled From Compressive Sampling to Compressive Computing and is joint work with Laurent Demanet.
Also, Shiqian Ma just let me know that he has set up a website for his FPCA code (Fixed Point Continuation with Approximate SVD), which is designed for solving matrix rank minimization and matrix completion problems. From the site:
FPCA is short for Fixed Point Continuation with Approximate SVD for solving matrix rank minimization and matrix completion problems.
The affinely constrained matrix rank minimization (ACMRM) problem is:

$\min_{X \in \mathbb{R}^{m \times n}} \ \operatorname{rank}(X) \quad \text{s.t.} \quad \mathcal{A}(X) = b,$

where $\mathcal{A}: \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{p}$ is a linear map and $b \in \mathbb{R}^{p}$.
A special case of ACMRM is the matrix completion (MC) problem:

$\min_{X} \ \operatorname{rank}(X) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i,j) \in \Omega.$

That is, a subset of the entries of a low-rank matrix M is known and we want to fill in the unknown entries.
The tightest convex relaxation of ACMRM is its nuclear norm relaxation:

$\min_{X} \ \|X\|_* \quad \text{s.t.} \quad \mathcal{A}(X) = b,$

where the nuclear norm $\|X\|_*$ is the sum of the singular values of $X$.
Note that if b is contaminated by noise, we should relax this problem to:

$\min_{X} \ \|X\|_* \quad \text{s.t.} \quad \|\mathcal{A}(X) - b\|_2 \leq \theta.$
FPCA solves the Lagrangian version of the above problem:

$\min_{X} \ \mu \|X\|_* + \frac{1}{2} \|\mathcal{A}(X) - b\|_2^2.$
This problem is also called the nuclear norm regularized least squares problem.
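The actual FPCA implementation relies on a fast approximate SVD and is available at the site below; for illustration only, here is a bare-bones fixed point continuation sketch of the matrix completion case (my own sketch, not Shiqian's code, using a full SVD and hypothetical parameter values):

```python
import numpy as np

def svt(Y, t):
    """Soft-threshold the singular values of Y by t (matrix shrinkage)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def fpc_complete(M_obs, mask, mu_final=1e-4, tau=1.0, inner=100):
    """Approximately solve min_X mu*||X||_* + 0.5*||P_Omega(X) - P_Omega(M)||_F^2
    by the fixed-point iteration X <- svt(X - tau*gradient, tau*mu),
    with mu decreased geometrically toward mu_final (the continuation part).
    M_obs holds the observed entries (zeros elsewhere); mask is boolean."""
    X = np.zeros_like(M_obs)
    mu, mus = np.abs(M_obs).max(), []   # start with a loose mu
    while mu > mu_final:
        mus.append(mu)
        mu *= 0.25
    mus.append(mu_final)
    for mu in mus:
        for _ in range(inner):
            grad = mask * X - M_obs     # gradient of the quadratic term
            X = svt(X - tau * grad, tau * mu)
    return X

# Tiny demonstration: recover a rank-3 matrix from half of its entries.
rng = np.random.default_rng(0)
m, n, r = 60, 50, 3
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
mask = rng.random((m, n)) < 0.5
X = fpc_complete(mask * M, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # relative error; should be small
```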
The website is at http://www.columbia.edu/~sm2756/FPCA.htm. Thank you, Shiqian!
With regard to new papers, I just found this one on arXiv:
Zvika Ben-Haim, Yonina Eldar and Michael Elad, Near-Oracle Performance of Basis Pursuit Under Random Noise. The abstract reads:
We consider the problem of estimating a deterministic sparse vector x0 from underdetermined noisy measurements, in which the noise is a Gaussian random vector. Two techniques which are commonly used in this setting are the Dantzig selector and basis pursuit denoising (BPDN). It has previously been shown that, with high probability, the Dantzig selector comes close to the performance of the oracle estimator which knows the locations of the nonzero components of x0, when the performance is measured by the L2 distance between x0 and its estimate. In this paper, we demonstrate that BPDN achieves analogous results, but that the constants involved in the BPDN analysis are significantly better.
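For readers who want to play with the comparison numerically, here is a small sketch of mine (not the authors' code) that solves BPDN in its Lagrangian form, $\min_x \frac{1}{2}\|Ax-b\|_2^2 + \lambda \|x\|_1$, by plain iterative soft thresholding, and compares it to the oracle least-squares estimator that knows the true support. The choice $\lambda \sim \sigma\sqrt{2\log n}$ is a standard heuristic, not a value taken from the paper:

```python
import numpy as np

def ista(A, b, lam, iters=3000):
    """BPDN, min 0.5*||A x - b||_2^2 + lam*||x||_1, via iterative
    soft thresholding (proximal gradient with fixed step 1/L)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
m, n, s, sigma = 40, 120, 4, 0.05
support = rng.choice(n, size=s, replace=False)
x0 = np.zeros(n)
x0[support] = rng.normal(size=s)
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x0 + sigma * rng.normal(size=m)

# BPDN estimate versus the oracle that knows the support of x0.
x_bpdn = ista(A, b, lam=sigma * np.sqrt(2 * np.log(n)))
x_oracle = np.zeros(n)
x_oracle[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
print(np.linalg.norm(x_bpdn - x0), np.linalg.norm(x_oracle - x0))
```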
Image Credit: NASA/JPL/Space Science Institute, Epimetheus.