Thursday, November 30, 2017

Deep Generative Adversarial Networks for Compressed Sensing Automates MRI - implementation - / Recurrent Generative Adversarial Networks for Proximal Learning and Automated Compressive Image Recovery

Morteza just sent me the following awesome use of GANs that is eerily close to the dichotomy between the analysis and the synthesis approaches in compressive sensing (I look forward to the use of GANs in learning field equations). Here is how he describes his recent work:


Hi Igor,

I would like to share with you and the Nuit Blanche readers our recent series of work on using generative models for compressed sensing.

We initially started using deep GANs for retrieving diagnostic-quality MR images. Our observations in https://arxiv.org/abs/1706.00051 are quite promising!! The discriminator network can play the role of a radiologist to score the perceptual quality of retrieved MR images.

In order to reduce the training and test overhead for real-time applications, we then designed a recurrent generative network that unrolls the proximal gradient iterations. We use ResNets, and the results are really interesting!! A simple single residual block, repeated a few times, can accurately learn the proximal operator and outperforms conventional CS-wavelet reconstruction by around 4 dB. The results are reported in https://arxiv.org/pdf/1711.10046.pdf.

It would be great if you could share this news with your readers!!

Thanks,
Morteza
Thanks Morteza !
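
For readers who would like to see the recurrence both papers build on, here is a minimal NumPy sketch, not taken from either paper, of the classical proximal-gradient (ISTA) iteration x_{k+1} = prox( x_k - eta * A^T (A x_k - y) ), with plain soft-thresholding standing in for the proximal operator, as in the conventional CS baselines Morteza mentions. The papers below replace that hand-crafted shrinkage step with a small learned residual network; the Gaussian sensing matrix and sparse test signal here are illustrative assumptions only.

import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1 (elementwise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, n_iters=200, lam=0.05):
    # Proximal gradient descent for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size <= 1 / ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)               # data-consistency gradient step
        x = soft_threshold(x - eta * grad, eta * lam)
    return x

# Toy example: undersampled Gaussian measurements of a 10-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)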
 
 
Deep Generative Adversarial Networks for Compressed Sensing Automates MRI by Morteza Mardani, Enhao Gong, Joseph Y. Cheng, Shreyas Vasanawala, Greg Zaharchuk, Marcus Alley, Neil Thakur, Song Han, William Dally, John M. Pauly, Lei Xing
Magnetic resonance image (MRI) reconstruction is a severely ill-posed linear inverse task demanding time and resource intensive computations that can substantially trade off accuracy for speed in real-time imaging. In addition, state-of-the-art compressed sensing (CS) analytics are not cognizant of the image diagnostic quality. To cope with these challenges we put forth a novel CS framework that permeates benefits from generative adversarial networks (GAN) to train a (low-dimensional) manifold of diagnostic-quality MR images from historical patients. Leveraging a mixture of least-squares (LS) GANs and pixel-wise ℓ1 cost, a deep residual network with skip connections is trained as the generator that learns to remove the aliasing artifacts by projecting onto the manifold. LSGAN learns the texture details, while ℓ1 controls the high-frequency noise. A multilayer convolutional neural network is then jointly trained based on diagnostic quality images to discriminate the projection quality. The test phase performs feed-forward propagation over the generator network that demands a very low computational overhead. Extensive evaluations are performed on a large contrast-enhanced MR dataset of pediatric patients. In particular, images rated by expert radiologists corroborate that GANCS retrieves high contrast images with detailed texture relative to conventional CS and pixel-wise schemes. In addition, it offers reconstruction under a few milliseconds, two orders of magnitude faster than state-of-the-art CS-MRI schemes.
 An implementation is here.
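
To make the objective described in the abstract a little more concrete, here is a hedged PyTorch sketch of a mixed LSGAN + pixel-wise ℓ1 training loss. It is not the released implementation; the names generator, discriminator, lam_adv and lam_l1 are illustrative placeholders.

import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, zero_filled, fully_sampled,
                   lam_adv=0.1, lam_l1=1.0):
    # Generator projects the aliased (zero-filled) input onto the image manifold.
    fake = generator(zero_filled)
    d_fake = discriminator(fake)
    # Least-squares GAN term: push D(fake) toward the "real" label 1 (texture detail).
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))
    # Pixel-wise l1 term keeps high-frequency noise and hallucination in check.
    pix = F.l1_loss(fake, fully_sampled)
    return lam_adv * adv + lam_l1 * pix

def discriminator_loss(discriminator, fake, real):
    # LSGAN discriminator, acting as the "radiologist": real -> 1, generated -> 0.
    d_real = discriminator(real)
    d_fake = discriminator(fake.detach())
    return 0.5 * (F.mse_loss(d_real, torch.ones_like(d_real)) +
                  F.mse_loss(d_fake, torch.zeros_like(d_fake)))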

Recurrent Generative Adversarial Networks for Proximal Learning and Automated Compressive Image Recovery by Morteza Mardani, Hatef Monajemi, Vardan Papyan, Shreyas Vasanawala, David Donoho, John Pauly

Recovering images from undersampled linear measurements typically leads to an ill-posed linear inverse problem that asks for proper statistical priors. Building effective priors is however challenged by the low train and test overhead dictated by real-time tasks; and the need for retrieving visually "plausible" and physically "feasible" images with minimal hallucination. To cope with these challenges, we design a cascaded network architecture that unrolls the proximal gradient iterations by permeating benefits from generative residual networks (ResNet) to modeling the proximal operator. A mixture of pixel-wise and perceptual costs is then deployed to train proximals. The overall architecture resembles back-and-forth projection onto the intersection of feasible and plausible images. Extensive computational experiments are examined for a global task of reconstructing MR images of pediatric patients, and a more local task of superresolving CelebA faces, that are insightful to design efficient architectures. Our observations indicate that for MRI reconstruction, a recurrent ResNet with a single residual block effectively learns the proximal. This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2 dB SNR, and the conventional compressed-sensing MRI by 4 dB SNR with 100x faster inference. For image superresolution, our preliminary results indicate that modeling the denoising proximal demands deep ResNets.
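
As a rough illustration of the recurrent architecture described above, here is a short PyTorch sketch, an assumption-laden reading of the abstract rather than the paper's code, in which a single residual block is reused as the learned proximal at every unrolled iteration, alternating with a gradient step toward data consistency. The callables forward_op and adjoint_op stand for the measurement operator A and its adjoint and are supplied by the user.

import torch
import torch.nn as nn

class ResidualProx(nn.Module):
    # One small residual block, shared across all unrolled iterations.
    def __init__(self, channels=2, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)            # proximal step as a residual correction

class UnrolledProxGrad(nn.Module):
    # x_{k+1} = prox_theta( x_k - eta * A^H (A x_k - y) ), recurrent over n_iters steps.
    def __init__(self, forward_op, adjoint_op, n_iters=5, eta=1.0):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op
        self.prox = ResidualProx()         # the same weights are reused at every iteration
        self.n_iters, self.eta = n_iters, eta

    def forward(self, x0, y):
        x = x0
        for _ in range(self.n_iters):
            x = x - self.eta * self.At(self.A(x) - y)   # step toward the "feasible" set
            x = self.prox(x)                            # learned projection onto "plausible" images
        return x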
 
 
 
