Wednesday, April 25, 2007
Finite Rate of Innovation is the reason why S_N works in Neutron Transport?
One of the most annoying issues with neutron transport using the linear Boltzmann equation is the very large family of solution techniques. One of the underlying reasons is the dimensionality of the problem: at the very least the problem is of dimension 2, but it can go all the way to dimension 7. One solution technique is Caseology, wherein one devises the solution of a problem by projecting it onto a set of eigenfunctions known as Case eigenfunctions. Other, more ad hoc techniques include projection onto a set of spherical harmonics (the P_N solution technique) or onto a set of Dirac functions (the S_N solution, or discrete ordinates). Yet in neither case is there a good way to stop these computations except through the use of ad hoc criteria. Then compressed sensing happened, with its flurry of unusual results. This next one looks like one of those.
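To see the difference between the two projections, here is a small sketch (my own illustration, nothing to do with a production transport code) that expands a made-up angular flux in Legendre polynomials (the P_N route) and, alternatively, keeps only its values at a discrete-ordinates quadrature (the S_N route):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# A made-up angular flux psi(mu) on mu in [-1, 1] (illustration only)
psi = lambda mu: np.exp(-3.0 * (mu - 0.4) ** 2)

N = 8  # truncation order / number of ordinates

# P_N-style projection: moments m_l = (2l+1)/2 * integral of psi(mu) P_l(mu) dmu
mu_q, w_q = leggauss(64)          # fine quadrature just to evaluate the integrals
moments = []
for l in range(N + 1):
    P_l = legval(mu_q, np.eye(N + 1)[l])       # evaluates the l-th Legendre polynomial
    moments.append((2 * l + 1) / 2.0 * np.sum(w_q * psi(mu_q) * P_l))

# Reconstruction from the truncated moment set: psi(mu) ~ sum_l m_l P_l(mu)
psi_PN = lambda mu: sum(m * legval(mu, np.eye(N + 1)[l]) for l, m in enumerate(moments))

# S_N-style representation: keep psi only at the Gauss ordinates (Dirac-like sampling)
mu_n, w_n = leggauss(N)
psi_SN = psi(mu_n)                 # the discrete-ordinates unknowns

print("P_N reconstruction at mu=0.4:", psi_PN(0.4), "exact:", psi(0.4))
print("S_N ordinates:", mu_n.round(3), "values:", psi_SN.round(3))
```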
Pier Luigi Dragotti, Martin Vetterli and Thierry Blu just wrote Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: Shannon Meets Strang-Fix [1], where they show that if you project a certain number of Diracs onto a family of scaling and wavelet functions, all the information you need to reconstruct the full signal (all the Diracs) is already "stored" in the scaling function coefficients. This is obviously of clear interest to people who want to do superresolution, and it is also probably the reason why P_N or S_N actually works.
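To make the "all the information is in the moments" statement concrete, here is a minimal sketch (my own toy example, not the authors' code) of the annihilating-filter step at the heart of finite-rate-of-innovation recovery: K Diracs are recovered exactly from their first 2K moments.

```python
import numpy as np

# Ground truth: K Diracs, x(t) = sum_k a_k * delta(t - t_k), with t_k in [0, 1)
t_true = np.array([0.12, 0.47, 0.83])
a_true = np.array([1.0, -0.5, 2.0])
K = len(t_true)

# Moments tau_m = sum_k a_k * t_k**m, m = 0..2K-1 (what the scaling-function
# coefficients give access to in the Dragotti-Vetterli-Blu setting)
M = 2 * K
tau = np.array([np.sum(a_true * t_true**m) for m in range(M)])

# Annihilating filter: find h = [1, h_1, ..., h_K] with sum_i h_i * tau_{m-i} = 0
A = np.array([[tau[m - i] for i in range(1, K + 1)] for m in range(K, M)])
b = -tau[K:M]
h = np.concatenate(([1.0], np.linalg.solve(A, b)))

# The Dirac locations are the roots of the annihilating filter
t_est = np.sort(np.roots(h).real)

# Amplitudes follow from a Vandermonde system using the first K moments
V = np.vander(t_est, K, increasing=True).T     # rows: t^0, t^1, ..., t^(K-1)
a_est = np.linalg.solve(V, tau[:K])

print("locations:", t_est, "amplitudes:", a_est)
```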
It may even provide a good starting point to solving the transport equation using the multi-dimensional polynomials found by Jacques Devooght [2].
On a side note, there is also a nagging resemblance between the kernels used by Dragotti, Vetterli and Blu and the Placzek function central to neutron thermalization in neutron transport. The upshot is that most sources are Dirac-like, and the Placzek function is really the Green's function providing an idea of how that source will downscatter in an infinite medium. In other words, the "simplified" linear transport equation (in energy) is one of these finite-rate-of-innovation kernels found by Dragotti, Vetterli and Blu.
[1] Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: Shannon Meets Strang-Fix, P.L. Dragotti, M. Vetterli, T. Blu, IEEE Transactions on Signal Processing, vol. 55, no. 5, part 1, pp. 1741-1757, May 2007.
[2] J. Devooght, New elementary solutions of the monoenergetic Boltzmann equation, Transport Theory and Statistical Physics, Vol. 20, No. 5, p. 383.
Tuesday, April 24, 2007
DARPA Urban Challenge: The basics
One of the earliest milestones that an autonomous car needs to reach for the competition is the ability to follow GPS waypoints. Since I don't have much time to detail Kalman filtering using GPS and an IMU, I will point to the explanation and implementation of software that does just that for the Texas A&M University autonomous car that was never entered in the competition (a minimal sketch of that kind of GPS/IMU fusion appears after the abstracts below). Two theses were written on the subject:
One was written by James Massey on Control and way point navigation of an autonomous ground vehicle, available at http://handle.tamu.edu/1969.1/3862:
Abstract: This thesis describes the initial development of the Texas A&M Autonomous Ground Vehicle test platform and waypoint following software, including the associated controller design. The original goal of the team responsible for the development of the vehicle was to enter the DARPA Grand Challenge in October 2005. A 2004 Ford F150 4x4 pickup was chosen as the vehicle platform and was modified with a 6" suspension lift and 35" tires, as well as a commercial drive-by-wire system. The waypoint following software, the design of which is described in this thesis, is written in C and successfully drives the vehicle on a course defined by GPS waypoints at speeds up to 50 mph. It uses various heuristics to determine desired speeds and headings and uses control feedback to guide the vehicle towards these desired states. A vehicle dynamics simulator was also developed for software testing. Ultimately, this software will accept commands from advanced obstacle avoidance software so that the vehicle can navigate in true off-road terrain.
The other one was written by Graig Odom entitled Navigation solution for the Texas A&M autonomous ground vehicle
Abstract: The need addressed in this thesis is to provide an Autonomous Ground Vehicle (AGV) with accurate information regarding its position, velocity, and orientation. The system chosen to meet these needs incorporates (1) a differential Global Positioning System, (2) an Inertial Measurement Unit consisting of accelerometers and angular-rate sensors, and (3) a Kalman Filter (KF) to fuse the sensor data. The obstacle avoidance software requires position and orientation to build a global map of obstacles based on the returns of a scanning laser rangefinder. The path control software requires position and velocity. The development of the KF is the major contribution of this thesis. This technology can either be purchased or developed, and, for educational and financial reasons, it was decided to develop instead of purchasing the KF software. This thesis analyzes three different cases of navigation: one-dimensional, two-dimensional and three-dimensional (general). Each becomes more complex, and separating them allows a three-step progression to reach the general motion solution. Three tests were conducted at the Texas A&M University Riverside campus that demonstrated the accuracy of the solution. Starting from a designated origin, the AGV traveled along the runway and then returned to the same origin within 11 cm along the North axis, 19 cm along the East axis and 8 cm along the Down axis. Also, the vehicle traveled along runway 35R, which runs North-South, within 0.1°, with the yaw solution consistently within 1° of North or South. The final test was mapping a box onto the origin of the global map, which requires accurate linear and angular position estimates and a correct mapping transformation.
It is available at: http://handle.tamu.edu/1969.1/4244
I'll mention later how our vehicle differs from this approach.
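As advertised above, here is a minimal, generic sketch of the kind of GPS/IMU fusion described in the second abstract (a one-dimensional toy example, not the actual Texas A&M code): the IMU acceleration drives the prediction step at a high rate, while occasional GPS fixes correct the estimate.

```python
import numpy as np

dt = 0.01                                # IMU rate: 100 Hz
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])      # how acceleration enters the state
H = np.array([[1.0, 0.0]])               # GPS measures position only
Q = 1e-3 * np.eye(2)                     # process noise (IMU drift, unmodeled dynamics)
R = np.array([[4.0]])                    # GPS noise variance (~2 m standard deviation)

x = np.zeros((2, 1))                     # state estimate [position; velocity]
P = np.eye(2)                            # estimate covariance

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 0.0

for k in range(2000):
    accel_true = 0.5                     # constant acceleration, for the demo
    true_vel += accel_true * dt
    true_pos += true_vel * dt

    # --- predict with the (noisy) IMU acceleration ---
    accel_meas = accel_true + rng.normal(0, 0.1)
    x = F @ x + B * accel_meas
    P = F @ P @ F.T + Q

    # --- update with a GPS fix at 1 Hz ---
    if k % 100 == 0:
        z = np.array([[true_pos + rng.normal(0, 2.0)]])
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R
        K_gain = P @ H.T @ np.linalg.inv(S)
        x = x + K_gain @ y
        P = (np.eye(2) - K_gain @ H) @ P

print("true position:", round(true_pos, 2), "estimated:", round(float(x[0, 0]), 2))
```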
Saturday, April 21, 2007
Bayesian Search and Rescue
While some people are still looking for Jim Gray, there are a number of issues that eventually need to be investigated at the science and technology level. While the Coast Guard's current SAROPS capabilities are very impressive, there may be ways to improve some of its already powerful features. I recently came across a technique that could probably have helped in solving the Tenacious Challenge. It is entitled Coordinated Decentralized Search for a Lost Target in a Bayesian World [1], by Frederic Bourgault, Tomonari Furukawa and Hugh F. Durrant-Whyte. The abstract reads as follows:
This paper describes a decentralized Bayesian approach to coordinating multiple autonomous sensor platforms searching for a single non-evading target. In this architecture, each decision maker builds an equivalent representation of the target state PDF through a Bayesian DDF network enabling him or her to coordinate their actions without exchanging any information about their plans. The advantage of the approach is that a high degree of scalability and real time adaptability can be achieved. The effectiveness of the approach is demonstrated in different scenarios by implementing the framework for a team of airborne search vehicles looking for a stationary, and a drifting target lost at sea.
and it looks like the type of approach I was mentioning earlier. This article takes as a starting point the tragedy of the 1979 Fastnet race. As it turns out, another Tenacious won that race. I will be sharing my thoughts on this technique and other possible improvements in a future entry. A related subject of interest is sensor networks, since we had a mix of different sensors watching the same areas at different times.
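The core of this kind of search, whether run by one platform or by a decentralized team, is a recursive Bayesian update of the target position PDF from "no detection" observations, combined with a prediction step for the target drift. A minimal single-searcher sketch on a grid, with made-up drift and detection models, would look like this:

```python
import numpy as np

n = 50
belief = np.full((n, n), 1.0 / n**2)      # prior PDF over a 50x50 search grid

def update_no_detection(belief, sensor):
    """Bayes update after the sensor looked near `sensor` and saw nothing."""
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    # Toy detection model: probability of detecting the target if it is in a cell
    p_detect = 0.9 * np.exp(-((ii - sensor[0]) ** 2 + (jj - sensor[1]) ** 2) / 20.0)
    posterior = belief * (1.0 - p_detect)  # likelihood of "no detection" per cell
    return posterior / posterior.sum()

def drift(belief):
    """Crude drift/diffusion of the PDF between observations (e.g. ocean current)."""
    b = np.roll(belief, 1, axis=1)         # everything drifts one cell east
    return 0.9 * b + 0.1 * belief          # plus some uncertainty about the drift

# A searcher sweeps a line of the grid; cells it covers lose probability mass
sensor_track = [(25, j) for j in range(0, n, 2)]
for pos in sensor_track:
    belief = drift(belief)
    belief = update_no_detection(belief, pos)

print("most probable cell after the sweep:",
      np.unravel_index(belief.argmax(), belief.shape))
```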
[1] F. Bourgault, T. Furukawa, H. F. Durrant-Whyte, Coordinated Decentralized Search for a Lost Target in a Bayesian World, Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), 27-31 October 2003, Vol. 1, pp. 48-53.
Thursday, April 19, 2007
Approaching a Random Radiation Detector
We are coming closer to the interaction between integral equation solving and compressed sensing. Case in point: the new preprint from Lawrence Carin, Dehong Liu, and Ya Xue at Duke University entitled In Situ Compressive Sensing.
They use an additional known heterogeneous medium in order to project the unknown original signal onto a new basis. This is not unlike the random lens imager of Rob Fergus, Antonio Torralba, and William T. Freeman at MIT, where that heterogeneous material is the random lens. Much work goes into calibrating that random lens using known images. Similarly here, the known heterogeneous medium has a transfer function that is very well known (after calibration), and one uses this transfer function to produce "new" measurements, or "new random projections", from the original unknown signal. These new signals are the CS measurements. Most of the work performed rests in the calibration/knowledge of this heterogeneous medium in order to eventually extract information from the compressed measurements. The work pays off because, in the end, not only do you image the source, but you can also invert the whole field, i.e. you see everything between the source and your sensor.
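Schematically (and this is my own toy sketch, not the authors' setup), the calibrated transfer function of the heterogeneous medium plays the role of the measurement matrix, and a source that is sparse in some basis is recovered from far fewer measurements than unknowns; here the recovery step uses a simple orthogonal matching pursuit:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 256, 64, 5                      # signal length, measurements, sparsity

# "Calibrated medium": each measurement is a known, scrambled mix of the source.
# A Gaussian matrix stands in for the measured transfer function of the medium.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)

# Unknown source: a few point-like emitters (sparse in the canonical basis)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 3, size=k)

y = Phi @ x                               # the "in situ" measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick columns that explain the residual."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("support recovered:", sorted(np.flatnonzero(x_hat)) == sorted(np.flatnonzero(x)))
print("max reconstruction error:", np.abs(x_hat - x).max())
```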
What is interesting is the use of the Green's function for the Helmholtz equation, because it clearly is a path toward solving the linear Boltzmann equation (which has a Helmholtz equation component for the discrete eigenvalues of the Case eigenfunctions). [For more reference on Caseology, look up articles in TTSP and JQSRT.] This opens the door to radiation sensors (not just electromagnetic radiation, but particle detection as well).
Returning to the In-situ compressive sensing paper, the authors state:
Rather than measuring an antenna pattern in the far zone and in free space as a function of angle, the antenna may be placed within two large plates (or other multi-path-generating medium) and a relatively small number of measurements may be performed, with the angle and frequency dependent radiation pattern inferred using CS techniques like those considered here.
One can begin to see how compressed sensing is going to be implemented in other radiation fields. In Compressive Radar Imaging by Richard Baraniuk and Philippe Steeghs, the idea is to play on the dissimilarity between the signal and the target, thereby making explicit the assumptions made on the target of interest. Here, on the other hand, the expectation is for additional hardware (a random but well-characterized material) to produce the compressed measurements. This is done irrespective of any information on the target because, in this case, it is difficult to modulate the signal.
.... This suggests that the in situ CS technique may be used to perform measurements of evanescent fields, with this also supported by the theory presented in Sec. III. Note that such measurements require one to place a well-understood heterogeneous medium in the presence of evanescent fields, and therefore it is of interest to explore linkages to similar measurements performed with negative-index artificial materials. In particular, it may be worthwhile to investigate whether such metamaterials constitute a particularly attractive complex medium through which to perform in situ CS measurements, for sensing evanescent fields.
In the radiation business, sources are difficult to modulate, so the in-situ approach will likely be favored, even though the radar approach may be more suitable for radiation interrogation techniques. One can imagine that instead of using the spatial decomposition of the signal, it might be more relevant to use the energy spectrum make-up of the target of interest. A good knowledge of the heterogeneous material could be obtained using Monte Carlo techniques (MCNP), and detection of the element of interest could be performed using the smashed filter approach. With regard to the solution technique of the deterministic Boltzmann equation, the Green's function [1] has been known for quite some time, so it may be a matter of coupling it to "random sources" in order to produce a new solution technique.
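For the detection-only scenario, the smashed filter idea amounts to matched filtering performed directly on the compressed measurements: one correlates y = Phi x with Phi s for each candidate spectral template s, with no reconstruction step. A hedged sketch (the template library and the projection matrix below are entirely made up):

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 512, 40                                  # spectrum length, number of CS measurements
Phi = rng.normal(size=(m, n)) / np.sqrt(m)      # known projection (e.g. a calibrated medium)

# Toy library of emission "spectra": Gaussian lines at different energies
energies = np.linspace(0, 1, n)
def template(center):
    s = np.exp(-((energies - center) ** 2) / (2 * 0.01**2))
    return s / np.linalg.norm(s)

library = {name: template(c)
           for name, c in [("isotope_A", 0.2), ("isotope_B", 0.5), ("isotope_C", 0.8)]}

# The scene contains isotope_B; we only ever see its compressed measurements
x = 4.0 * library["isotope_B"] + 0.05 * rng.normal(size=n)
y = Phi @ x

# Smashed filter: match in the compressed domain, no reconstruction needed
scores = {name: abs(y @ (Phi @ s)) for name, s in library.items()}
print("best match:", max(scores, key=scores.get), scores)
```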
[1] G. J. Mitsis, Transport Solutions to the Monoenergetic Critical Problem, ANL-6787, Argonne National Laboratory (1963).
Tuesday, April 17, 2007
Compressed Sensing in the Primary Visual Cortex?
[Update, please note that the Compressed Sensing Blog is at this address: http://nuit-blanche.blogspot.com/search/label/CS]
Thomas Serre, Aude Oliva and Tomaso Poggio just came out with a paper showing that the brain processes information in a feedforward fashion, i.e. in most of the brain architecture there is no feedback loop. It's a breakthrough because, even though the biology seemed to show this, there was little computational modeling that could support the hypothesis. Most of the modeling is featured in Thomas Serre's Ph.D. thesis entitled:
Learning a dictionary of shape-components in visual cortex: Comparison with neurons, humans and machines, PhD Thesis, CBCL Paper #260/MIT-CSAIL-TR #2006-028, Massachusetts Institute of Technology, Cambridge, MA, April, 2006
I hinted at this earlier, but compressed sensing seems to be such a robust technique that there is little reason to believe it is not part of a biological process at work in the brain. Then I found the following statement on page 3 of the preprint of the PNAS paper (thanks to Thomas Serre, I found it in the final paper; it is in the footnote section here):
Functional organization: Layers in the model are organized in feature maps which may be thought of as columns or clusters of units with the same selectivity (or preferred stimulus) but with receptive fields at slightly different scales and positions (see Fig. S1). Within one feature map all units share the same selectivity, i.e., synaptic weight vector w, which is learned from natural images (see subsection A.1.2). There are several parameters governing the organization of individual layers: K_X is the number of feature maps in layer X. Units in layer X receive their inputs from a topologically related N_X × N_X × S_X grid of possible afferent units from the previous layer, where N_X defines a range of positions and S_X a range of scales. Simple units pool over afferent units at the same scale, i.e., S_{S_k} contains only a single scale element. Also note that, in the current model implementation, while complex units pool over all possible afferents such that each unit in layer C_k receives n_{C_k} = N_{C_k} × N_{C_k} × S_{C_k}, simple units receive only a subset of the possible afferent units (selected at random) such that n_{S_k} < N_{S_k} × N_{S_k} (see Table S1 for parameter values). Finally, there is a downsampling stage from the S_k to the C_k stage. While S units are computed at all possible locations, C units are only computed every C_k possible locations. Note that there is a high degree of overlap between units in all stages (to guarantee good invariance to translation). The number of feature maps is conserved from the S_k to the C_k stage, i.e., K_{S_k} = K_{C_k}. The value of all parameters is summarized in Table S1.
So it looks like, in this layered approach to stimulus understanding, the current modeling allows for randomly picking some stimuli out of several in order to go to a higher level of synthesis. This approach is very similar to compressed sensing, or to some of the concepts behind the Uniform Uncertainty Principle developed by Terry Tao, Emmanuel Candes and Justin Romberg (since we know that natural images, for the most part, are sparse in the Fourier domain). Two features of this model can be mapped to the compressed sensing approach: a feedback mechanism could be mapped into the usual transform coding approach (compute all the wavelet coefficients and keep only the largest ones), whereas compressed sensing avoids the nonlinear process of the feedback mechanism. Random sampling is the current best approach to provide a uniform sampling strategy irrespective of most known bases (sines, cosines, wavelets, curvelets, ...).
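To make the transform-coding versus compressed-sensing contrast of the previous paragraph concrete, here is a toy comparison (entirely my own illustration, not the model in the paper): the "feedback" route computes all the coefficients in a known basis and keeps the largest ones, while the "feedforward" route keeps a fixed set of random measurements chosen before ever looking at the signal.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)

n, k = 1024, 40
# A signal that is (approximately) sparse in the DCT domain, like many natural signals
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 5, size=k)
signal = idct(coeffs, norm="ortho") + 0.01 * rng.normal(size=n)

# "Feedback" / transform-coding route: compute ALL n coefficients, rank them,
# keep the k largest (a nonlinear, adaptive step)
all_coeffs = dct(signal, norm="ortho")
keep = np.argsort(np.abs(all_coeffs))[-k:]
approx_coeffs = np.zeros(n)
approx_coeffs[keep] = all_coeffs[keep]
err = np.linalg.norm(idct(approx_coeffs, norm="ortho") - signal) / np.linalg.norm(signal)
print("transform coding, relative k-term error:", err)

# "Feedforward" / CS route: a fixed set of m random measurements, chosen without
# looking at the signal; recovery would then proceed by l1 minimization or a
# greedy method such as the matching pursuit sketched above
m = 4 * k
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ signal
print("non-adaptive measurements kept:", m, "out of", n)
```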
Monday, April 16, 2007
Flower Constellations in Space come in Squares
Why is it that simple shapes in astronomy make the news when they could be explained away by the framework that produces flower constellations, where Diophantine equations are solved?
Why is it that simple shapes in fluid mechanics make the news when they can already be observed in the lab?
Are we mesmerized by the straight lines or by the sharp corners?
Saturday, April 14, 2007
It's not about the sensor, it's about the target
Terry Tao just wrote an entry in his blog trying to explain compressed sensing through the lens of the one-pixel camera from Rice. It's been my experience that compressed sensing results are so unusual in engineering terms that, as proved by the length of Terry's post, there is a lot of explaining to do. On top of it, the wording "compressed sensing" or "compressive sampling" or "compressive sensing" automatically gets engineers to think of compression as implemented in the JPEG/JPEG2000 sense (transform coding), so much of the explanation of compressed sensing goes toward explaining why it is strictly not that. So while most people ask whether this new type of camera changes the digital world, it does not come across that one of the strengths of the technique is the tremendous simplification and robustness of the detection algorithms. This is not trivial, as image processing has always involved a lot of parameter tweaking in order to obtain "the best result". It should help.
Sunday, April 08, 2007
DARPA Urban Challenge: Unveiling our algorithm
In a previous entry, I mentioned that we were unveiling our strategy for our entry in the DARPA Urban Challenge (the DARPA Urban Challenge is about driving at a record pace in an urban environment past many urban difficulties, including losing GPS capability). That was not accurate: in fact, we really are going to unveil our algorithm. You'll be privy to the development quirks and everything that goes into implementing an algorithm that has to respond on-line to a challenging situation. I'll talk about the history of why we are choosing specific algorithms over others. I will specifically say more about manifold-based models for decision making in the race and the use of techniques devised to store previous actions in order to produce some sort of supervised learning capability. In the previous race, we were eliminated early, mostly because we were plagued with mechanical problems that most of us had never faced before (none of us had a robotics background); we hope to go farther this time as the vehicle is OK. For reference, we have already shown some of our drive-by-wire program before, as well. We made some of our data available before, and I expect, as time permits, to do the same as we go along. Because our entry is truly innovative, we are trying to balance not getting eliminated (by passing every step of the application process) against innovation in the algorithm. However, since none of us is interested in just an autonomous car, our emphasis will always be on doing something that most other entries are not attempting, such as using compressed sensing and robust mathematical techniques.
Sunday, April 01, 2007
Compressed Sensing Illustrated
Here is a good explanation as to why compressed sensing is indeed universal sensing. It stores the most important information that describes the target. It's not about the sensor, it's about the target.