
Thursday, June 13, 2013

PyHST2: an hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities - implementation -




We present the PyHST2 code which is in service at ESRF for phase-contrast and absorption tomography. This code has been engineered to sustain the high data flow typical of third-generation synchrotron facilities (10 terabytes per experiment) by adopting a distributed and pipelined architecture. The code implements, besides a default filtered backprojection reconstruction, iterative reconstruction techniques with a-priori knowledge. The latter are used to improve the reconstruction quality or to reduce the required data volume while reaching a given quality goal. The implemented a-priori knowledge techniques are based on the total variation penalisation and on a recently introduced convex functional based on overlapping patches.
We give details of the different methods and their implementations; the code is distributed under a free license.
We provide methods for estimating, in the absence of ground-truth data, the optimal parameter values for the a-priori techniques.
PyHST2 is available at: http://forge.epn-campus.eu/projects/pyhst2
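The total variation penalisation mentioned in the abstract can be illustrated with a toy denoiser: minimise a data-fidelity term plus a smoothed TV term by plain gradient descent. This is a minimal sketch under my own assumptions (function name, smoothing constant, step size); PyHST2's actual solvers are far more elaborate.

```python
import numpy as np

def tv_denoise(b, lam=0.15, step=0.2, iters=200, eps=1e-6):
    """Minimise 0.5*||x - b||^2 + lam * TV(x) on a 2-D image by
    gradient descent on a smoothed (differentiable) total variation.
    A toy sketch only, not PyHST2's implementation."""
    x = b.astype(np.float64).copy()
    for _ in range(iters):
        # forward differences along each axis (last row/column repeated)
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        # divergence of the normalised gradient field: the TV subgradient
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        # descend on data term, ascend on the (negated) TV term
        x -= step * ((x - b) - lam * div)
    return x
```

On a piecewise-constant image corrupted by Gaussian noise, this shrinks the noise in flat regions while largely preserving edges, which is the behaviour that makes TV attractive for tomographic slices.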

I note three items that are bound to be part of our collective experience as we are surfing Moore's law. From this outstanding paper, one can read:  

  • "....10 terabytes per experiment..." Memory is cheap; if people can record it, they will. 
  • "...The total data volume is often larger than the available memory...." If one were to perform these experiments often, no, make that very very often, then we would have to move to streaming algorithms. Right now, the science is such that performing these computations is a day's work, and a few of these computations are probably behind one or two theses' worth of results. In ten years' time, that experiment could be done every ten seconds. At that point, the algorithms will have to catch up, and while memory will still be cheap, we will require data reconstruction at a faster pace.
  • "...If the ground truth is not available, statistical methods such as the discrepancy principle [19] or generalized cross-validation [20] can be used to select the optimal regularization parameter. Such methods could be implemented in future versions of PyHST...." At some point, we will have to move to a grammar-based exploratory approach. The choosing of parameters, while worthwhile, is a reminder that we do not know the underlying structure of the signal very well.  
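The streaming regime anticipated above can be sketched with a classic online algorithm: Welford's per-pixel mean and variance, which touches one frame at a time and never holds the full stack in memory. The function name and frame shapes are hypothetical; this illustrates streaming computation in general, not PyHST2 code.

```python
import numpy as np

def online_stats(frames):
    """Welford's online algorithm: per-pixel mean and sample variance
    over a stream of equally shaped frames, one frame in memory at a
    time. A sketch of the streaming regime, not part of PyHST2."""
    mean = m2 = None
    n = 0
    for frame in frames:
        f = np.asarray(frame, dtype=np.float64)
        n += 1
        if mean is None:
            mean, m2 = f.copy(), np.zeros_like(f)
        else:
            delta = f - mean
            mean += delta / n           # running mean update
            m2 += delta * (f - mean)    # running sum of squared deviations
    return mean, m2 / max(n - 1, 1)
```

Because each frame is consumed and discarded, the memory footprint is two frames regardless of how many terabytes the stream contains, which is exactly the property batch algorithms lose once data outgrows RAM.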
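The discrepancy principle quoted in the last bullet is easy to sketch for Tikhonov (ridge) regularization: pick the lambda whose residual norm matches the known noise level, exploiting the fact that the residual grows monotonically with lambda. The helper below is a hypothetical illustration (the name and bracketing interval are mine), not the future PyHST2 feature.

```python
import numpy as np

def discrepancy_lambda(A, b, noise_norm, lo=1e-8, hi=1e4, iters=60):
    """Select the Tikhonov parameter by the discrepancy principle:
    bisect (in log space) until ||A x_lam - b|| matches the known
    noise norm. A toy sketch for dense least-squares problems."""
    def residual(lam):
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        return np.linalg.norm(A @ x - b)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)            # geometric midpoint
        if residual(mid) < noise_norm:    # residual still below noise:
            lo = mid                      # lambda is too small
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

The same bracketing idea carries over to iterative reconstructions, where each residual evaluation means one full solve, which is why cheap surrogates like generalized cross-validation are also mentioned in the paper.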

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
