
Tuesday, September 22, 2009

CS: Sapphire, CS with Probabilistic Measurements: A Group Testing Solution, Quantum state tomography, Rate-Distortion of CS, Guaranteed sparse recovery

This is interesting: Project Sapphire is now a public story. Back in the day (1995), we had a presentation on what had happened at the Nuclear Engineering Department seminar, but then, for reasons unknown to us, it was never mentioned in detail in a public setting again.



From the article, here is a passage that reminded me of a scene in The Peacemaker (without the adversarial action!):

..."And these trucks were sliding all over the place, and I'm thinking, 'I don't want to make the call to Washington saying one of the trucks with highly enriched uranium went off the bridge into the river, and we're trying to locate it.' But somehow, miraculously, we made it all safely to the airport."

It took three hours to load the plane. But before it could take off, the runway had to be cleared of snow. Sleet, ice and rain blanketed the airfield, a pilot later recalled. There were no plows to be seen. Finally, airport workers brought out a truck with a jet engine mounted on the back. They fired up the engine and blasted the runway free of snow.

The article ends with:
In late 1994, the Joint Atomic Energy Intelligence Committee prepared a report about the extent of the Russian nuclear materials crisis. The top-secret document concluded that not a single facility storing highly enriched uranium or plutonium in the former Soviet Union had safeguards up to Western standards.

This exercise was really a lightning rod that changed our perception of how some of these materials had to be protected. It unleashed a series of efforts toward securing these materials and also opened the door to the disposition of excess weapons plutonium.

Finding all the nuclear materials with a detector in a closed area of the plant in Ust-Kamenogorsk could be a nice application of the new kind of group testing presented in the following eye-opening paper, Compressed Sensing with Probabilistic Measurements: A Group Testing Solution (yes, CS and group testing are close). The abstract reads:

Detection of defective members of large populations has been widely studied in the statistics community under the name “group testing”, a problem which dates back to World War II when it was suggested for syphilis screening. There the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by observing the fact that in a viral sickness, there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty; namely, identification of the infected population and the partially unknown sampling procedure. In this work, by using a natural probabilistic model for “viral infections”, we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1 − o(1). We propose both probabilistic and explicit design procedures that require a “small” number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k^2 log(n/k)/(1 − p)) and m = O(k^2 log^2 n/(1 − p)^2), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).
Wow. I have not found toy code implementing the algorithm, but I am sure it is in the making, as this paper goes beyond traditional group testing (a small simulation sketch of the setup appears below). From the paper:
The idea behind our setup is mathematically related to compressed sensing [17], [18]. Nevertheless, they differ in a significant way: In compressed sensing, the samples are gathered as linear observations of a sparse real signal and typically tools such as linear programming methods are applied for the reconstruction. To do so, it is assumed that the decoder knows the measurement matrix a priori. However, this is not the case in our setup. In other words, using the language of compressed sensing, in our scenario the measurement matrix might be “noisy” and is not precisely known to the decoder. As it turns out, by using a sufficient number of agents this issue can be resolved.
Let us thank the authors for making this point clear to their compressed sensing audience. This approach clearly opens the door to new kinds of intelligent sensors with logging capabilities. I also note the different applications of group testing listed in their paper:
This is the main conceptual idea behind the classical group testing problem which was introduced by Dorfman [1] and later found applications in a variety of areas. A few examples of such applications include testing for defective items (e.g., defective light bulbs or resistors) as a part of industrial quality assurance [2], DNA sequencing [3] and DNA library screening in molecular biology (see, e.g., [4], [5], [6], [7], [8] and the references therein), multiaccess communication [9], data compression [10], pattern matching [11], streaming algorithms [12], software testing [13], and compressed sensing [14]. See the books by Du and Hwang [15], [16] for a detailed account of the major developments in this area.
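Since that toy code does not seem to exist yet, here is a minimal simulation sketch of the setup described in the abstract, under assumptions of my own: a non-adaptive Bernoulli contact design with contact probability on the order of 1/k, and a naive correlation-scoring decoder rather than the paper's actual decoder. It only illustrates the two sources of uncertainty (the unknown infected set and the contacts missed with probability p); it is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (my choices, not the paper's experiments):
n, k, p = 2000, 5, 0.3   # population size, number infected, miss probability
m = int(4 * k**2 * np.log(n / k) / (1 - p))   # mimics the O(k^2 log(n/k)/(1-p)) bound

# Non-adaptive Bernoulli design: agent j contacts person i independently,
# with a contact probability on the order of 1/k (an assumption).
contact = rng.random((m, n)) < np.log(2) / k

# Ground truth: k infected individuals, unknown to the decoder.
infected = np.zeros(n, dtype=bool)
infected[rng.choice(n, size=k, replace=False)] = True

# An agent tests positive if at least one contact with an infected person
# transmits; each such contact is missed independently with probability p,
# which is the "partially unknown sampling" of the abstract.
transmit = contact & infected & (rng.random((m, n)) >= p)
positive = transmit.any(axis=1)

# Naive scoring decoder (NOT the paper's decoder): reward membership in
# positive tests, penalize membership in negative ones, keep the top k.
score = (2.0 * positive - 1.0) @ contact.astype(float)
estimate = np.sort(np.argsort(score)[-k:])

print("true infected:", np.flatnonzero(infected))
print("estimated    :", estimate)
```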
I will surely dig into the references in that list in the future. Also found on the interwebs:

Quantum state tomography via compressed sensing by David Gross, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, Jens Eisert. The abstract reads:

We establish novel methods for quantum state and process tomography based on compressed sensing. Our protocols require only simple Pauli measurements, and use fast classical post-processing based on convex optimization. Using these techniques, it is possible to reconstruct an unknown density matrix of rank r using O(r d log^2 d) measurement settings, a significant improvement over standard methods that require d^2 settings. The protocols are stable against noise, and extend to states which are approximately low-rank. The acquired data can be used to certify that the state is indeed close to a low-rank one, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations.
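To make the convex post-processing step concrete, here is a small sketch of the kind of program involved: trace-norm (nuclear-norm) minimization of a Hermitian matrix constrained to match a subset of Pauli expectation values. The toy choices (three qubits, a random pure state, 30 of the 64 Pauli settings, CVXPY as the solver front end) are mine, not the authors'.

```python
from functools import reduce
from itertools import product

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit Pauli matrices; n-qubit settings are their tensor products.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

n_qubits = 3
d = 2 ** n_qubits

# Ground truth: a random rank-1 density matrix (a pure state).
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho_true = np.outer(psi, psi.conj())

# Measure a random subset of Pauli settings, far fewer than all d^2 = 64.
settings = list(product([I2, X, Y, Z], repeat=n_qubits))
chosen = rng.choice(len(settings), size=30, replace=False)
ops = [reduce(np.kron, settings[i]) for i in chosen]
data = [np.real(np.trace(op @ rho_true)) for op in ops]

# Trace-norm minimization subject to the measured expectation values --
# a sketch of the convex-optimization step mentioned in the abstract.
rho = cp.Variable((d, d), hermitian=True)
constraints = [cp.real(cp.trace(op @ rho)) == val for op, val in zip(ops, data)]
cp.Problem(cp.Minimize(cp.normNuc(rho)), constraints).solve()

print("recovery error:", np.linalg.norm(rho.value - rho_true))
```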
On the Empirical Rate-Distortion Performance of Compressive Sensing by Adriana Schulz, Luiz Velho, Eduardo A. B. da Silva. The abstract reads:
Compressive Sensing (CS) is a new paradigm in signal acquisition and compression that has been attracting the interest of the signal compression community. When it comes to image compression applications, it is relevant to estimate the number of bits required to reach a specific image quality. Although several theoretical results regarding the rate-distortion performance of CS have been published recently, there are not many practical image compression results available. The main goal of this paper is to carry out an empirical analysis of the rate-distortion performance of CS in image compression. We analyze issues such as the minimization algorithm used and the transform employed, as well as the trade-off between number of measurements and quantization error. From the experimental results obtained we highlight the potential and limitations of CS when compared to traditional image compression methods.
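To see the trade-off between number of measurements and quantization error in miniature, here is a 1-D sketch of my own devising (not the authors' code or test conditions): a DCT-sparse signal is measured with a Gaussian matrix, each measurement is uniformly quantized at b bits, the signal is recovered by basis pursuit denoising via CVXPY, and the rate is counted crudely as m·b bits.

```python
import cvxpy as cp
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(2)

# A signal that is s-sparse in the DCT domain (toy stand-in for an image).
n, s = 256, 8
coeffs = np.zeros(n)
coeffs[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)
x = idct(coeffs, norm="ortho")

Psi = idct(np.eye(n), norm="ortho", axis=0)   # inverse-DCT synthesis matrix

for m, bits in [(64, 4), (64, 8), (128, 4), (128, 8)]:
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    y = A @ x

    # Uniform scalar quantization of the measurements at `bits` bits each.
    step = (y.max() - y.min()) / 2 ** bits
    y_q = np.round(y / step) * step

    # Basis pursuit with an l2 tolerance sized for the quantization noise.
    c = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(c)),
                      [cp.norm(A @ Psi @ c - y_q, 2) <= step * np.sqrt(m)])
    prob.solve()

    mse = np.mean((Psi @ c.value - x) ** 2)
    print(f"m={m:3d}, {bits} bits -> rate ~{m * bits:5d} bits, MSE {mse:.2e}")
```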

The paper is not available, but the code used to produce the results of this paper can be found here. It features a Noiselet code; I wonder how different it is from this one.

Finally, we have: A note on guaranteed sparse recovery via $\ell_1$-minimization by Simon Foucart. The abstract reads:
It is proved that every s-sparse vector x ∈ C^N can be recovered from the measurement vector y = Ax ∈ C^m via ℓ_1-minimization as soon as the 2s-th restricted isometry constant of the matrix A is smaller than about 0.4627, or smaller than about 0.4679 for large values of s.
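To get a feel for the quantity in that statement, here is a tiny brute-force illustration of my own (not from the note): for a small Gaussian matrix one can enumerate every 2s-column submatrix and compute the 2s-th restricted isometry constant exactly, then compare it against the ~0.4627 threshold. This is only feasible at toy sizes, since computing restricted isometry constants is intractable in general.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(3)

# Tiny Gaussian matrix, scaled so that E ||A x||_2^2 = ||x||_2^2.
m, N, s = 8, 12, 2
A = rng.normal(size=(m, N)) / np.sqrt(m)

# delta_{2s} is the smallest delta such that
#   (1 - delta) ||x||^2 <= ||A x||^2 <= (1 + delta) ||x||^2
# for all 2s-sparse x: the maximum deviation from 1 of the eigenvalues
# of A_T^* A_T over all column supports T of size 2s.
delta = 0.0
for T in combinations(range(N), 2 * s):
    sub = A[:, list(T)]
    eig = np.linalg.eigvalsh(sub.T @ sub)
    delta = max(delta, 1 - eig[0], eig[-1] - 1)

print(f"delta_{2 * s} = {delta:.4f}  (guarantee needs < ~0.4627)")
```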
