I do not really understand how Distributed Sensor Networks (DSN) benefit from CS. In general, the main constraint in a DSN is the battery life of the sensors, and the battery is mostly consumed by communications. The final aim is therefore to reduce the communication cost of each sensor. Ordinarily, if there are n sensors, each one sends one piece of data to the fusion center. With CS techniques, each sensor needs to multiply its value by a k-dimensional vector and send out k pieces of data. What CS argues is that all the sensors send out k pieces of data, these are added in the air by the superposition of radio waves, and the fusion center decodes the n-dimensional vector from the k-dimensional vector. The problem is that each sensor is sending out k pieces instead of one, so it is likely that the battery burns out faster. Maybe I haven't understood the problem. Can you explain? Or who is the right person to enquire about this?
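To make the measurement model in the question concrete, here is a small numpy sketch of the scheme the reader describes: each sensor i transmits the k values Phi[:, i] * x[i], superposition in the air yields y = Phi @ x, and the fusion center recovers the sparse n-dimensional reading vector from only k measurements. The dimensions and the use of Orthogonal Matching Pursuit as the decoder are illustrative choices of mine, not part of the question; this sketch illustrates only the measurement model, not the energy accounting the reader is asking about.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, s = 100, 40, 3  # sensors, measurements, number of active sensors

# Sparse vector of sensor readings: only s sensors see something.
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.choice([-1.0, 1.0], size=s)

# Each sensor i knows its own column Phi[:, i] and transmits Phi[:, i] * x[i];
# the fusion center receives the in-air superposition y = Phi @ x.
Phi = rng.normal(size=(k, n)) / np.sqrt(k)
y = Phi @ x

def omp(Phi, y, s):
    """Orthogonal Matching Pursuit: recover an s-sparse x from y = Phi @ x."""
    support, r = [], y.copy()
    for _ in range(s):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, s)
err = np.max(np.abs(x_hat - x))
```

In this noiseless setting the decoder recovers x essentially exactly; whether the scheme saves energy overall depends on how the per-sensor cost of k analog transmissions compares with routing n individual packets, which is exactly the trade-off the reader is questioning.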
FPCA is short for Fixed Point Continuation with Approximate SVD, an algorithm for solving matrix rank minimization and matrix completion problems.
The affinely constrained matrix rank minimization (ACMRM) problem is:

min rank(X) s.t. A(X) = b,

where A is a linear map from R^{m x n} to R^p and b is in R^p.
A special case of ACMRM is the matrix completion (MC) problem:

min rank(X) s.t. X_ij = M_ij for all (i,j) in Omega.

That is, a subset Omega of the entries of a low-rank matrix M is known and we want to fill in the unknown entries.
The tightest convex relaxation of ACMRM is its nuclear norm relaxation:

min ||X||_* s.t. A(X) = b,

where the nuclear norm ||X||_* is the sum of the singular values of X.
Note that if b is contaminated by noise, we should relax this problem to:

min ||X||_* s.t. ||A(X) - b||_2 <= theta.
FPCA solves the Lagrangian version of the above problem:

min mu ||X||_* + (1/2) ||A(X) - b||_2^2.
This problem is also called the nuclear norm regularized least squares problem.
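For intuition, here is a stripped-down numpy sketch of the fixed-point (proximal gradient) iteration at the heart of FPCA, applied to matrix completion: a gradient step on the least-squares term followed by singular value shrinkage, with a simple continuation schedule on mu. It omits the approximate SVD, and the dimensions, sampling rate, and mu schedule are illustrative choices of mine, not values from the FPCA paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 20, 20, 2
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # rank-2 ground truth
mask = rng.random((m, n)) < 0.6                        # observed entries

def svt_shrink(X, tau):
    """Singular value shrinkage: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

X = np.zeros((m, n))
sigma0 = np.linalg.norm(mask * M, 2)  # largest singular value of the data
# Continuation: solve the regularized problem for a decreasing sequence of mu.
for mu in sigma0 * np.array([0.25, 0.05, 0.01, 0.002]):
    for _ in range(100):
        # Gradient step on (1/2)||P_Omega(X) - P_Omega(M)||^2 (step size 1),
        # then the nuclear norm prox: this is the fixed-point iteration.
        X = svt_shrink(X + mask * (M - X), mu)

err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

The continuation loop (large mu first, then smaller) is what the "continuation" in the algorithm's name refers to: heavy shrinkage early on keeps the iterates low-rank and speeds up convergence, while the small final mu removes most of the shrinkage bias.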
Zvika Ben-Haim, Yonina Eldar and Michael Elad, Near-Oracle Performance of Basis Pursuit Under Random Noise. The abstract reads:
We consider the problem of estimating a deterministic sparse vector x0 from underdetermined noisy measurements, in which the noise is a Gaussian random vector. Two techniques which are commonly used in this setting are the Dantzig selector and basis pursuit denoising (BPDN). It has previously been shown that, with high probability, the Dantzig selector comes close to the performance of the oracle estimator which knows the locations of the nonzero components of x0, when the performance is measured by the L2 distance between x0 and its estimate. In this paper, we demonstrate that BPDN achieves analogous results, but that the constants involved in the BPDN analysis are significantly better.

Image Credit: NASA/JPL/Space Science Institute, Epimetheus.
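The paper analyzes BPDN rather than proposing an algorithm, but as a reminder of what BPDN actually computes, here is a small numpy sketch that solves the BPDN objective min (1/2)||Ax - b||^2 + lambda ||x||_1 by iterative soft thresholding (ISTA). The problem sizes, noise level, and lambda are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 50, 200, 4                      # measurements, dimension, sparsity
A = rng.normal(size=(n, m)) / np.sqrt(n)  # underdetermined sensing matrix
x0 = np.zeros(m)
x0[rng.choice(m, s, replace=False)] = rng.choice([-1.0, 1.0], size=s)
b = A @ x0 + 0.01 * rng.normal(size=n)    # noisy measurements

lam = 0.05                                # lambda in the BPDN objective
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(2000):
    x = x - (A.T @ (A @ x - b)) / L       # gradient step on the LS term
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

The recovered x is close to x0 up to the noise and the small shrinkage bias that lambda introduces, which is the kind of near-oracle L2 error the abstract is quantifying.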