Wednesday, June 20, 2018

Deep Mesh Projectors for Inverse Problems - implementation -

Ivan just let me know of the following instance of the Great Convergence:
Dear Igor,

A few weeks ago you featured two interesting papers that use random projections to train robust convnets (

I wanted to let you know about our related work, which is a bit different in spirit: we learn to solve severely ill-posed inverse problems by learning to reconstruct low-dimensional projections of the unknown model instead of the full model. When we choose the low-dimensional subspaces to be piecewise constant on random meshes, the projected inverse maps are much simpler to learn (in terms of Lipschitz stability constants, say), leading to a better-behaved inverse overall.

If you’re interested, the paper is here:

and the code here:

I would be grateful if you could advertise the work on Nuit Blanche.

Best wishes,
Thanks Ivan !

We develop a new learning-based approach to ill-posed inverse problems. Instead of directly learning the complex mapping from the measured data to the reconstruction, we learn an ensemble of simpler mappings from data to projections of the unknown model into random low-dimensional subspaces. We form the reconstruction by combining the estimated subspace projections. Structured subspaces of piecewise-constant images on random Delaunay triangulations allow us to address inverse problems with extremely sparse data and still obtain good reconstructions of the unknown geometry. This choice also makes our method robust against arbitrary data corruptions not seen during training. Further, it marginalizes the role of the training dataset, which is essential for applications in geophysics, where ground-truth datasets are exceptionally scarce.
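To make the core idea concrete, here is a minimal NumPy/SciPy sketch (not the authors' code; the point count, mesh construction, and test image are illustrative assumptions) of projecting an image onto the subspace of functions that are piecewise constant on a random Delaunay triangulation, and of combining several such projections:

```python
import numpy as np
from scipy.spatial import Delaunay

def random_mesh_projection(img, n_points=50, rng=None):
    """Project img onto piecewise-constant functions on a random Delaunay mesh."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    # Random interior vertices, plus the four corners so the mesh covers the image.
    pts = rng.uniform(0, 1, size=(n_points, 2)) * [h - 1, w - 1]
    corners = np.array([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]])
    tri = Delaunay(np.vstack([pts, corners]))
    # Assign every pixel center to the triangle that contains it.
    yy, xx = np.mgrid[0:h, 0:w]
    pix = np.column_stack([yy.ravel(), xx.ravel()])
    labels = tri.find_simplex(pix)
    # Piecewise-constant projection: replace each triangle by its mean intensity.
    flat = img.ravel()
    out = np.zeros(h * w)
    for t in range(tri.nsimplex):
        mask = labels == t
        if mask.any():
            out[mask] = flat[mask].mean()
    return out.reshape(h, w)

# Combine projections from several independent random meshes; in the paper the
# projections are *estimated from data* by learned maps, then combined similarly.
img = np.linspace(0, 1, 64 * 64).reshape(64, 64)  # toy "model" (a smooth ramp)
recon = np.mean([random_mesh_projection(img, rng=s) for s in range(10)], axis=0)
```

Each projection is a much lower-dimensional target than the full image (one value per triangle), which is what makes the per-subspace inverse maps easier to learn; averaging many independent random meshes recovers finer structure than any single mesh.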

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
