Here is an implementation that caught my eye: a reconstruction solver using an FPGA. It is not so new per se, but what caught my eye is how sparsity is rearing its head in the algorithmic implementation. First, the minimization is enforced on a combination of TV and EM arguments, on top of a positivity constraint. More work on that line of research can be found in the references listed at the end of this entry; as one can see, there is no sparsity enforcement from the get-go. Here is the paper: FPGA-Accelerated 3D Reconstruction Using Compressive Sensing by Jianwen Chen, Jason Cong, Ming Yan and Yi Zou. The abstract reads:
The radiation dose associated with computerized tomography (CT) is significant. Optimization-based iterative reconstruction approaches, e.g., compressive sensing, provide ways to reduce the radiation exposure without sacrificing image quality. However, the computational requirement of such algorithms is much higher than that of the conventional Filtered Back Projection (FBP) reconstruction algorithm. This paper describes an FPGA implementation of one important iterative kernel called EM, which is the major computation kernel of a recent EM+TV reconstruction algorithm. We show that a hybrid approach (CPU+GPU+FPGA) can deliver a better performance and energy efficiency than GPU-only solutions, providing a 13X throughput boost over a dual-core CPU implementation.
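For context, the EM+TV model in the references below combines a total-variation regularizer with a Poisson (EM-type) data-fidelity term under a positivity constraint. Schematically (my notation, not the papers'; $u$ is the image, $A$ the projection operator, $f$ the measured sinogram, $\alpha$ a regularization weight):

```latex
\min_{u \ge 0} \;\; \alpha \,\mathrm{TV}(u) \;+\; \sum_{i} \Big[ (Au)_i - f_i \log (Au)_i \Big]
```

Note that nothing in this objective is an explicit $\ell_1$ or $\ell_0$ penalty on the image itself; the TV term only penalizes the image gradient.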
So, how is sparsity enforced ?
3.5 Reducing the Data Accesses via Sparsity
The final output image of the compressive sensing algorithm is sparse. Also, we know that the image voxel value is non-negative. Based on these two facts, we develop a simple heuristic to reduce the amount of data access. At the beginning of the iteration, we perform a single forward projection. If any accumulated sinogram value falls below a threshold, we conclude that any image value on that ray shall be close to zero. Based on this, we build a mask of the image called image_denote. When we do the backward projection, we only update the voxels that are not masked. Note that this mask needs only 1 bit of data, so we merge this 1-bit data into the imageData array. In this way, we reduce the number of data accesses in the backward projection. Figure 6 shows the modified pseudo code.
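To make the heuristic concrete, here is a toy sketch of what I understand the mask trick to be, reconstructed from the prose above. Everything here is my own illustration: the function names (`build_mask`, `em_update_masked`), the dense matrix standing in for the projector, and the threshold value are all assumptions, not the paper's code.

```python
import numpy as np

def build_mask(A, sinogram, threshold):
    """Sketch of the mask heuristic (my reconstruction, not the paper's code):
    a voxel is kept only if it lies on at least one ray whose sinogram
    value exceeds the threshold; all other voxels are assumed ~zero."""
    keep_rays = sinogram > threshold              # rays that carry signal
    touched = A.T @ keep_rays.astype(float)       # voxels hit by a kept ray
    return touched > 0                            # boolean voxel mask

def em_update_masked(A, sinogram, image, mask, eps=1e-12):
    """One multiplicative EM (MLEM) iteration; masked-out voxels are
    simply forced to zero instead of being updated."""
    forward = A @ image                           # forward projection
    ratio = sinogram / np.maximum(forward, eps)   # measured / estimated
    back = A.T @ ratio                            # backward projection
    norm = np.maximum(A.sum(axis=0), eps)         # sensitivity normalization
    new_image = image * back / norm               # EM multiplicative step
    new_image[~mask] = 0.0                        # skip masked voxels
    return new_image

# Tiny usage example: 2 rays, 3 voxels, one bright voxel.
A = np.array([[1., 1., 0.],
              [0., 1., 1.]])
x_true = np.array([2., 0., 0.])
sino = A @ x_true                                 # [2, 0]
mask = build_mask(A, sino, threshold=0.5)         # ray 2 is empty -> voxel 3 masked
img = em_update_masked(A, sino, np.ones(3), mask)
```

In a real implementation the payoff is not this arithmetic but the memory traffic: the 1-bit mask lets the backward projection skip loads and stores for voxels known to be near zero.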
uh, heuristics! Two thoughts come to my mind:
- In the first reference below, one can see that sparsity enforcement is not obvious, i.e. there is just a TV+EM minimization with a positivity constraint. How come one already gets good subsampling results ? Is EM+TV enforcing sparsity in a way that is not obvious ?
- Using heuristics like this would make it an ideal algorithm for a GraphLab implementation. If it can be implemented on the Cloud, it will always beat an FPGA/GPU implementation.
References:
- Expectation Maximization and Total Variation Based Model for Computed Tomography Reconstruction from Undersampled Data by Ming Yan and Luminita A. Vese
- EM+TV for Reconstruction of Cone-beam CT with Curved Detectors using GPU by Jianwen Chen, Ming Yan, Luminita A. Vese, John Villasenor, Alex Bui, and Jason Cong
- EM+TV Based Reconstruction for Cone-Beam CT with Reduced Radiation by Ming Yan, Jianwen Chen, Luminita A. Vese, John Villasenor, Alex Bui, and Jason Cong
Liked this entry ? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.