I'll be attending a few talks at the First French-German Mathematical Image Analysis Conference before eventually co-hosting the 7th Paris Machine Learning Meetup on Wednesday night. See you there.

**FGMIA 2014**

## - Program -

**Monday 13 January**

- 08:45am-09:00am - Welcome Message
- 09:00am-09:45am -
- 09:45am-10:30am -
- Coffee break
- 11:00am-11:45am -
- 11:45am-12:30pm -
- Lunch break
- 02:00pm-02:45pm -
- 02:45pm-03:30pm -
- Coffee break
- 04:00pm-04:45pm -
- 04:45pm-05:30pm -

**Tuesday 14 January**

- 09:00am-09:45am -
- 09:45am-10:30am -
- Coffee break
- 11:00am-11:45am -
- 11:45am-12:30pm -
- Lunch break
- 02:00pm-02:45pm -
- 02:45pm-03:30pm -
- Coffee break
- 04:00pm-04:45pm -
- 04:45pm-05:30pm -

**Wednesday 15 January**

- 09:00am-09:45am -
- 09:45am-10:30am -
- Coffee break
- 11:00am-11:45am -
- 11:45am-12:30pm -
- Lunch break
- 02:00pm-02:45pm -
- 02:45pm-03:30pm -
- Coffee break
- 04:00pm-04:45pm -
- 04:45pm-05:30pm -

## - Abstracts -

Richard Baraniuk - Learning Near-Isometric Linear Embeddings

In many machine learning and signal processing applications, we seek a low-dimensional representation (or embedding) of data that live in a high-dimensional ambient space. The classical principal components analysis (PCA) approach linearly maps the data into the lower-dimensional subspace spanned by the dominant eigenvectors of the data covariance matrix. While simple and computationally efficient, PCA suffers from an important drawback: the resulting embedding can arbitrarily distort pairwise distances between sample data points. The neoclassical random projections approach maps the data into a random lower-dimensional subspace. While also simple, random projections offer only probabilistic and asymptotic performance guarantees and, moreover, cannot leverage any special geometric structure of the data that might be present. In this talk, we introduce a new framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Our formulation is based on an affine rank minimization problem that we relax into a tractable semidefinite program (SDP). The resulting Nuclear norm minimization with Max-norm constraints (NuMax) framework has a number of applications in machine learning and signal processing, which we demonstrate via a range of experiments on large-scale synthetic and real datasets.
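The near-isometry criterion at the heart of this work — how much a linear map distorts pairwise distances — is easy to probe numerically. The sketch below is my own illustration, not the NuMax SDP itself (which requires a semidefinite solver): it measures the worst-case distance distortion of a PCA embedding versus a random projection on synthetic data lying near a low-dimensional subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 points near a 3-dimensional subspace of R^100.
n, D, d = 50, 100, 3
basis = rng.standard_normal((D, d))
X = rng.standard_normal((n, d)) @ basis.T + 0.01 * rng.standard_normal((n, D))

def max_distance_distortion(X, P):
    """Worst-case ratio spread of pairwise distances under the linear map P."""
    ratios = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            orig = np.linalg.norm(X[i] - X[j])
            ratios.append(np.linalg.norm(P @ (X[i] - X[j])) / orig)
    ratios = np.array(ratios)
    return ratios.max() / ratios.min()

# Classical PCA: project onto the dominant right singular vectors.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P_pca = Vt[:d]                                    # d x D projection

# Random projection baseline (scaled Gaussian matrix).
P_rand = rng.standard_normal((d, D)) / np.sqrt(d)

print("PCA distortion:   ", max_distance_distortion(X, P_pca))
print("Random distortion:", max_distance_distortion(X, P_rand))
```

Because the data here sit close to a 3-dimensional subspace, the PCA map is nearly isometric on this particular point set; NuMax instead optimizes this distortion criterion directly over all linear maps.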

Yi Ma - The Pursuit of Low-dimensional Structures in High-dimensional Data

In this talk, we will discuss a new class of models and techniques that can effectively model and extract rich low-dimensional structures in high-dimensional data such as images and videos, despite nonlinear transformation, gross corruption, or severely compressed measurements. This work leverages recent advancements in convex optimization for recovering low-rank or sparse signals that provide both strong theoretical guarantees and efficient and scalable algorithms for solving such high-dimensional combinatorial problems. These results and tools actually generalize to a large family of low-complexity structures whose associated regularizers are decomposable. We illustrate how these new mathematical models and tools could bring disruptive changes to solutions to many challenging tasks in computer vision, image processing, and pattern recognition. We will also illustrate some emerging applications of these tools to other data types such as web documents, image tags, microarray data, audio/music analysis, and graphical models.

This is joint work with John Wright of Columbia, Emmanuel Candes of Stanford, Zhouchen Lin of Peking University, and my students Zhengdong Zhang, Xiao Liang of Tsinghua University, Arvind Ganesh, Zihan Zhou, Kerui Min and Hossein Mobahi of UIUC.
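The decomposable regularizers mentioned above come with cheap closed-form proximity operators, which is what makes these convex programs scale: the l1 norm yields entrywise soft-thresholding (promoting sparsity) and the nuclear norm yields singular value thresholding (promoting low rank). A minimal numpy sketch of the two operators — an illustration, not the authors' full algorithms:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximity operator of tau*||.||_1: entrywise shrinkage toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Proximity operator of tau*||.||_* (nuclear norm): shrink singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Soft-thresholding zeroes out small entries.
print(soft_threshold(np.array([3.0, -0.2, 0.7]), 0.5))

# Singular value thresholding lowers rank: shrinking the spectrum of
# diag(3, 1, 0.2) by 0.5 removes the smallest singular value entirely.
M = np.diag([3.0, 1.0, 0.2])
print(np.linalg.matrix_rank(singular_value_threshold(M, 0.5)))  # -> 2
```

Robust PCA style decompositions (low-rank plus sparse) alternate these two operators inside a splitting scheme such as ADMM.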

Anders Hansen - Compressed sensing in the real world: the need for a new theory

Compressed sensing is based on three pillars: sparsity, incoherence and uniform random subsampling. In addition, the concepts of uniform recovery and the Restricted Isometry Property (RIP) have had a great impact. Intriguingly, in an overwhelming number of inverse problems where compressed sensing is used or can be used (such as MRI, X-ray tomography, electron microscopy, reflection seismology, etc.), these pillars are absent. Moreover, easy numerical tests reveal that with the successful sampling strategies used in practice, one observes neither uniform recovery nor the RIP. In particular, none of the existing theory can explain the success of compressed sensing in the vast areas where it is used. In this talk we will demonstrate how real-world problems are not sparse, yet asymptotically sparse; coherent, yet asymptotically incoherent; and moreover, that uniform random subsampling yields highly suboptimal results. In addition, we will present easy arguments explaining why uniform recovery and the RIP are not observed in practice. Finally, we will introduce a new theory that aligns with the actual implementation of compressed sensing used in applications. This theory is based on asymptotic sparsity, asymptotic incoherence and random sampling with different densities. It supports two intriguing phenomena observed in practice: 1. the success of compressed sensing is resolution dependent; 2. the optimal sampling strategy is signal-structure dependent. The last point opens up a whole new area of research, namely the quest for optimal sampling strategies.
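The "asymptotically sparse" behavior described here is easy to observe numerically: in a wavelet basis, the fraction of significant coefficients of a piecewise-smooth signal decays toward the finest scales. A small self-contained check (my own illustration, using a Haar transform on a synthetic signal):

```python
import numpy as np

def haar(x):
    """Full orthonormal Haar transform; returns coefficients grouped by level,
    finest detail level first, coarsest approximation last."""
    levels = []
    a = x.astype(float)
    while len(a) > 1:
        levels.append((a[0::2] - a[1::2]) / np.sqrt(2))   # detail
        a = (a[0::2] + a[1::2]) / np.sqrt(2)              # approximation
    levels.append(a)
    return levels

# Piecewise-smooth test signal: smooth trend plus a few jumps.
n = 1024
t = np.linspace(0, 1, n)
x = np.sin(2 * np.pi * t) + (t > 0.3) + 2.0 * (t > 0.7)

coeffs = haar(x)
thresh = 0.05
for j, c in enumerate(coeffs[:-1]):
    frac = np.mean(np.abs(c) > thresh)
    print(f"level {j} (size {len(c)}): fraction of large coeffs = {frac:.3f}")
```

The finest level (level 0) is almost entirely negligible coefficients — only the jumps register — while the coarse levels are dense: sparsity is not uniform but asymptotic in scale, which is exactly what motivates sampling with different densities.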

Deanna Needell - What we know and what we do not know about practical compressive sampling

Compressive sampling (CS) is a new and exciting technology which demonstrates that to recover compressible signals, far fewer samples are needed than traditional sampling theory suggests. Although the theory of CS is quite extensive for the now standard setting of incoherence and approximate sparsity, the practical abilities of CS are not always well understood. In this talk we will discuss several particular aspects of CS necessary for applications including dictionary or sampling correlations, quantization, and constrained measurements. New results showing whether CS is feasible in these settings are discussed, as well as some important directions that are not yet understood. We ask when and under what conditions can the power of CS actually be utilized, as well as with what methods accurate recovery can be achieved.
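As a concrete baseline for the recovery problems discussed here, a few lines of numpy implement orthogonal matching pursuit, one standard CS decoder (the sizes, seed and sensing matrix below are illustrative, and the measurements are noiseless):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A,
    re-fitting the coefficients by least squares after each selection."""
    r, support = y.copy(), []
    x_s = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(2)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
y = A @ x0                                        # noiseless measurements
x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x0))
```

The practical complications the abstract raises — quantized y, correlated dictionary columns, constrained measurements — are precisely the departures from this idealized setup.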

Mario Figueiredo - Some Recent Advances on the Use of ADMM in Imaging: Non-Periodic and Blind Deconvolution

The alternating direction method of multipliers (ADMM) has been shown to be a flexible and efficient convex optimization tool for a variety of imaging inverse problems under non-smooth convex regularization. ADMM exhibits state-of-the-art speed if the subproblems that constitute each iteration are efficiently solvable (e.g., using fast transforms or simple proximity operators). In deconvolution, one of these subproblems involves a matrix inversion, which can be done efficiently (by FFT) under periodic boundary conditions. We show how to extend ADMM-based image deconvolution to the realistic scenario of unknown boundaries. The proposed approach also handles, at no extra cost, problems that combine inpainting with deconvolution. In another direction, we show how ADMM can be used to address blind deconvolution, achieving highly competitive results with very few assumptions about the unknown convolution kernel.
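The FFT trick mentioned for the periodic case can be spelled out: a circulant blur operator is diagonalized by the DFT, so the quadratic ADMM subproblem (HᵀH + ρI)x = b reduces to pointwise division in the Fourier domain. A small 1-D numerical check against a direct dense solve (kernel and sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
h = np.zeros(n)
h[:5] = rng.random(5)           # blur kernel = first column of circulant H
rho = 0.5
b = rng.standard_normal(n)

# (H^T H + rho I) x = b is diagonalized by the DFT: eigenvalues |Hhat|^2 + rho.
H_hat = np.fft.fft(h)
x_fft = np.real(np.fft.ifft(np.fft.fft(b) / (np.abs(H_hat) ** 2 + rho)))

# Direct dense solve for comparison (O(n^3) vs O(n log n) above).
H = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])
x_direct = np.linalg.solve(H.T @ H + rho * np.eye(n), b)
print("max difference:", np.max(np.abs(x_fft - x_direct)))
```

It is exactly this inversion that breaks under non-periodic (unknown) boundaries, which is what the proposed extension addresses.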

Kristian Bredies - Total generalized variation: From regularization theory to applications in imaging

Total generalized variation (TGV) functionals have been shown to be effective penalties for variational imaging problems, allowing selective regularization on different levels of smoothness. In particular, they are edge-aware like the total variation (TV) but also incorporate higher-order information, leading to the absence of typical TV-induced artifacts such as staircasing. Their convexity is moreover a convenient feature for algorithm design and numerical computations. In this talk, we start by discussing fundamental regularization properties of TGV for symmetric tensors in a functional-analytic framework. It turns out that TGV constitutes a regularizer for inverse problems in essentially the same cases where it is possible to regularize with TV. We then discuss a general algorithmic framework which is suitable for the solution of TGV-regularized problems. The applicability to standard imaging problems like denoising, deblurring and compressed sensing is shown. Furthermore, we present various applications, in particular in magnetic resonance imaging and image decompression, where with the help of TGV one is able to obtain state-of-the-art high-quality reconstructions.

Xiaoqun Zhang - Primal-dual fixed point algorithms for separable minimization problems and their applications in imaging

In recent years, the minimization of a sum of two convex functions has received considerable interest in variational image restoration models. In this talk, I will present a general algorithmic framework for solving separable convex minimization problems, with or without constraints, from the point of view of fixed point algorithms and proximity operators. Convergence and rate analysis will be discussed under some assumptions. The efficiency and the advantages of the proposed algorithmic framework will be illustrated through applications to image restoration, parallel magnetic resonance imaging and computerized tomography. Finally, I will also discuss the prospect of applying it to nonconvex minimization problems such as regularized nonlinear inverse problems.

Raymond Chan - A Two-stage Image Segmentation Method using a Convex Variant of the Mumford-Shah Model and Thresholding

The Mumford-Shah model is one of the most important image segmentation models, and has been studied extensively in the last twenty years. In this talk, we propose a two-stage segmentation method based on the Mumford-Shah model. The first stage of our method is to find a smooth solution g to a convex variant of the Mumford-Shah model. Once g is obtained, then in the second stage, the segmentation is done by thresholding g into different phases. The thresholds can be given by the users or can be obtained automatically using any clustering methods. Because of the convexity of the model, g can be solved efficiently by techniques like the split-Bregman algorithm or the Chambolle-Pock method. We prove that our method is convergent and the solution g is always unique. In our method, there is no need to specify the number of segments K (K >= 2) before finding g. We can obtain any K-phase segmentations by choosing (K-1) thresholds after g is found in the first stage; and in the second stage there is no need to recompute g if the thresholds are changed to reveal different segmentation features in the image. Experimental results show that our two-stage method performs better than many standard two-phase or multi-phase segmentation methods for very general images, including anti-mass, tubular, MRI, noisy, and blurry images; and for very general noise models such as Gaussian, Poisson and multiplicative Gamma noise.
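The second stage is almost trivially cheap, which is the point of the method: once the smooth solution g is in hand, any K-phase segmentation is obtained by comparing it against K-1 thresholds, with no recomputation of g when the thresholds change. A minimal sketch (the toy g below stands in for the output of stage one):

```python
import numpy as np

def threshold_segmentation(g, thresholds):
    """Stage two: turn the smooth solution g into a K-phase labeling
    by counting how many of the K-1 thresholds each pixel exceeds."""
    labels = np.zeros(g.shape, dtype=int)
    for t in sorted(thresholds):
        labels += (g > t).astype(int)
    return labels

# Toy "smooth solution": two regions of different intensity on a background.
g = np.zeros((32, 32))
g[4:12, 4:12] = 0.5
g[18:28, 18:28] = 1.0

seg3 = threshold_segmentation(g, [0.25, 0.75])   # 3-phase segmentation
seg2 = threshold_segmentation(g, [0.25])         # 2-phase, reusing the same g
print("phases:", np.unique(seg3), np.unique(seg2))
```

In the paper's setting the thresholds can come from the user or from a clustering step; here they are hard-coded for illustration.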

Otmar Scherzer - Mathematical Methods for Photoacoustical Imaging

In this talk we give an overview of (quantitative) photoacoustic imaging. In particular we emphasize some recent trends in photoacoustic imaging, such as sectional imaging, where, as opposed to standard photoacoustics, only certain regions of an object are illuminated. We present backprojection formulas for this case. This focusing method also allows for the recovery of scattering and absorption parameters, i.e., quantitative imaging. A new measurement setup for this purpose is studied. Another advantage of sectional imaging is that it can be used as an approach for the simultaneous identification of the absorption density (the imaging parameter of photoacoustics) and the speed of sound via photoacoustic measurements. The mathematical model for such an experiment is developed and exact reconstruction formulas for both parameters are presented. This is joint work with P. Elbau (University of Vienna), A. Kirsch (University of Karlsruhe), and R. Schulze (University of Innsbruck).

Birsen Yazici - Microlocal Analysis in Synthetic Aperture Imaging

Microlocal analysis is the abstract theory of Fourier Integral Operators (FIO) and singularities. An FIO can be viewed as a generalized Radon transform. Microlocal analysis is important to imaging since generalized Radon transforms arise in a wide range of tomographic imaging problems. This talk will introduce basics of microlocal analysis and describe a novel synthetic aperture imaging modality inspired by this theory.

Adel Faridani - On $\pi$-line reconstruction formulas in tomography

$\pi$-line formulas are a family of exact inversion formulas for recovering a function from its line integrals in two or three dimensions. We investigate characteristic features, typical artifacts, sampling conditions and some specific applications for these formulas.

Christoph Schnörr - Variational Image Analysis Using Wasserstein-Distance Based Priors

Locally computed empirical measures of features are widely used for various tasks of image analysis and computer vision. This talk focuses on more involved variational approaches that include global priors based on the Wasserstein distance between empirical measures that depend on the variable to be optimized. Convex relaxations and problem decompositions are discussed for the design of numerical algorithms. Applications include variational image restoration, variational segmentation and cosegmentation. This is joint work with Paul Swoboda.

Daniel Cremers - Convex Relaxation Methods for Computer Vision

Numerous problems in computer vision and image analysis can be solved by variational methods and partial differential equations. Yet, many traditional approaches correspond to non-convex energies, giving rise to suboptimal solutions and often a strong dependency on appropriate initialization. In my presentation, I will show how problems like image segmentation, multiple view reconstruction, optical flow estimation and 3D shape matching can be tackled by means of convex relaxation methods. Subsequently, I will introduce methods of convexification which allow the efficient computation of globally optimal or near-optimal solutions. The arising convex problems can be solved by means of provably convergent primal-dual algorithms. They are easily parallelized on GPUs, providing high-quality solutions in acceptable runtimes.

Patrick Perez - From image to descriptors and back again

The context of this presentation is visual search, with a specific focus on retrieving images similar to a query image. I will discuss three problems:

- Aggregation of local descriptors (typically SIFTs) into a fixed-size image signature. I shall present the "Vector of Locally Aggregated Descriptors" (VLAD), an aggregation technique derived from the Fisher kernel, which offers a powerful alternative to the popular "Bag-of-Features" (BoF) approach. Combined with an efficient indexing system, VLAD can lead to a memory footprint as compact as 16 bytes per image with good approximate search performance.
- Efficient search with complex similarity measures. When kernels not based on the Euclidean distance are better suited to comparing image signatures (e.g., the chi-square kernel), the corresponding exact or approximate nearest-neighbor search can become expensive. I shall present ways to expedite it, including one based on so-called explicit embedding.
- Visual information leakage in image indexing systems. I shall show in particular that a human-understandable reconstruction of an image can be obtained from the sparse local descriptors of this image and no other side information.
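For concreteness, the VLAD aggregation step itself fits in a few lines: assign each local descriptor to its nearest codebook centroid, accumulate the residuals per centroid, and L2-normalize the concatenation. A sketch with random stand-ins for the SIFT descriptors and the k-means codebook (both would come from real data in practice):

```python
import numpy as np

def vlad(descriptors, centroids):
    """VLAD signature: per-centroid sum of residuals (x - c), L2-normalized."""
    K, d = centroids.shape
    v = np.zeros((K, d))
    # Nearest centroid for each descriptor.
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    for k in range(K):
        if np.any(assign == k):
            v[k] = (descriptors[assign == k] - centroids[k]).sum(axis=0)
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

rng = np.random.default_rng(4)
centroids = rng.standard_normal((8, 16))       # stand-in codebook (k-means in practice)
descriptors = rng.standard_normal((100, 16))   # stand-in for one image's local descriptors
sig = vlad(descriptors, centroids)
print("signature length:", sig.size)           # K * d = 8 * 16 = 128
```

The 16-bytes-per-image figure quoted above comes from further compressing such signatures (dimensionality reduction plus product quantization), which this sketch does not cover.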

Olga Sorkine-Hornung - Variational warping for multi-image editing

We present a method for the consistent automatic transfer of edits applied to one image to many other images of the same object or scene. By introducing novel, content-adaptive weight functions we enhance the non-rigid alignment framework of Lucas-Kanade to robustly handle changes of viewpoint, illumination and non-rigid deformations of the subjects. Our weight functions are content-aware and possess high-order smoothness, enabling the definition of high-quality image warps with a low number of parameters using spatially-varying weighted combinations of affine deformations. Optimizing the warp parameters leads to subpixel-accurate alignment while maintaining computational efficiency. Our method allows users to perform precise, localized edits such as simultaneous painting on multiple images in real time, relieving them of tedious and repetitive manual reapplication to each individual image.

Cordelia Schmid - DeepFlow: Large displacement optical flow with deep matching

Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacements in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed DeepFlow, blends a matching algorithm with a variational approach to optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that boosts performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it efficiently retrieves quasi-dense correspondences and enjoys a built-in smoothing effect on descriptor matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. DeepFlow efficiently handles the large displacements occurring in realistic videos and shows competitive performance on optical flow benchmarks. Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset.

René Vidal - Algebraic, Sparse and Low-Rank Subspace Clustering

In the era of data deluge, the development of methods for discovering structure in high-dimensional data is becoming increasingly important. Traditional approaches often assume that the data is sampled from a single low-dimensional manifold. However, in many applications in signal/image processing, machine learning and computer vision, data in multiple classes lie in multiple low-dimensional subspaces of a high-dimensional ambient space. In this talk, I will present methods from algebraic geometry, sparse representation theory and rank minimization for clustering and classification of data in multiple low-dimensional subspaces. I will show how these methods can be extended to handle noise, outliers as well as missing data. I will also present applications of these methods to video segmentation and face clustering.

Amit Singer - Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem

In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly-oriented copies of a molecule. The cryo-EM problem is to use the resulting set of noisy 2D projection images taken at unknown directions to reconstruct the 3D structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in cryo-EM. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously shown that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is however challenging, since only projections of the molecules are observed, but not the molecules themselves. In this talk we derive an estimator for the covariance matrix as a solution to a certain linear system. While we prove that the resulting estimator for the covariance matrix is consistent in the classical limit as the number of projection images grows indefinitely, an interesting open question regarding the sample complexity of the problem remains. Namely, how many images are required in order to resolve heterogeneous structures as a function of the volume size and the signal-to-noise ratio? We will see that solving this question requires us to extend the analysis of principal component analysis (PCA) in high dimensions, as we encounter limiting distributions that differ from the classical Marcenko-Pastur distribution. Joint work with G. Katsevich and A. Katsevich.

Eero Simoncelli - Recovery of sparse translation-invariant signals with continuous basis pursuit

We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., a set of shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the non-auxiliary coefficients using an L1 penalty. The well-known basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. I will show a new set of state-of-the-art results on the problem of estimating spike arrival times from multi-electrode measurements in the brain.
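The Taylor variant is easy to illustrate: augmenting the dictionary with the feature's derivative lets a coefficient pair (a, -a·tau) model a continuously shifted copy a·f(t - tau) to first order, far better than the unshifted atom alone. A small check on a Gaussian bump (my own illustration of the interpolation idea, not the full constrained optimization):

```python
import numpy as np

t = np.linspace(-5, 5, 1001)
f = np.exp(-t**2 / 2)           # elementary feature
df = np.gradient(f, t)          # its derivative = the auxiliary dictionary element

tau = 0.1                       # a sub-grid, continuous-valued shift
shifted = np.exp(-(t - tau)**2 / 2)

# First-order Taylor model: f(t - tau) ~ f(t) - tau * f'(t),
# i.e. dictionary coefficients (1, -tau) on (f, f').
approx = f - tau * df

err_taylor = np.max(np.abs(approx - shifted))
err_nearest = np.max(np.abs(f - shifted))   # ordinary BP stuck with the unshifted atom
print("Taylor error:", err_taylor, " unshifted-atom error:", err_nearest)
```

The constraint mentioned in the abstract (auxiliary coefficients restricted so that they only encode translations) corresponds here to tying the second coefficient to -a·tau with |tau| bounded by half the dictionary spacing.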

Yohan Tendero - On the Foundations of Computational Photography: Theory and Practice

This work is about a revolutionary camera concept called the "flutter shutter". Flutter shutter cameras promise to increase image quality when photographing moving objects. The talk gives a mathematical theory and formulas that answer the questions raised by these recent camera designs. (The theory is also proved valid for "motion invariant photography", the only other competitor of the flutter shutter.)
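The core idea behind the flutter shutter can be seen in one small computation: a conventional box exposure has zeros in the frequency response of its motion blur (information irrecoverably destroyed), while a pseudo-random open/close code keeps the spectrum away from zero, making deblurring well-posed. Sketch (the code length and pattern are illustrative, not a design from the talk):

```python
import numpy as np

n = 52
box = np.ones(n)                                  # conventional long exposure
rng = np.random.default_rng(5)
flutter = rng.integers(0, 2, n).astype(float)     # pseudo-random open/close code

# Magnitude of the frequency response of the induced motion blur.
grid = 512
H_box = np.abs(np.fft.rfft(box, grid))
H_flutter = np.abs(np.fft.rfft(flutter, grid))

print("smallest |H| box:    ", H_box.min())       # essentially zero (sinc nulls)
print("smallest |H| flutter:", H_flutter.min())   # bounded away from zero
```

Which codes are optimal, and how much they can really gain over a well-chosen snapshot, is exactly the kind of question the theory in this talk quantifies.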

Russell Luke - Nonconvex notions of regularity and sparse affine feasibility: local and global linear convergence to a global solution of a nonconvex optimization problem

We hope to convince our audience that convexity is not a categorical imperative for provably convergent numerical algorithms. Firm nonexpansiveness of self-mappings is a global property that yields global convergence of fixed point algorithms. This property is intimately connected to convexity and has a long history in the literature of convex optimization. Unfortunately, many applications one encounters are nonconvex. To accommodate nonconvexity, we formulate a generalization of firm nonexpansiveness that, together with a coercivity condition with respect to the set of fixed points, yields local linear convergence of generic nonmonotone fixed point algorithms. These tools enable us to prove local linear convergence of fundamental algorithms applied to the problem of finding a sparse vector subject to underdetermined affine constraints. Together with additional assumptions that are usually used to guarantee the correspondence of convex relaxations to the sparsity optimization problem, these local results are extended to show global linear convergence of a standard relaxation of the fundamental method of alternating projections applied to a nonconvex feasibility formulation of the problem.
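The nonconvex feasibility formulation referred to at the end alternates projections between the affine set {x : Ax = b} and the nonconvex set of s-sparse vectors, whose projection simply keeps the s largest-magnitude entries. A toy run (sizes, seed and iteration count are illustrative; as the abstract explains, convergence to the true sparse solution is only guaranteed locally and under additional assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, s = 25, 40, 2
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.choice([-1.0, 1.0], s) * (1 + rng.random(s))
b = A @ x_true

A_pinv = np.linalg.pinv(A)

def proj_affine(x):
    """Orthogonal projection onto the affine set {x : A x = b}."""
    return x - A_pinv @ (A @ x - b)

def proj_sparse(x, s):
    """Projection onto the nonconvex set of s-sparse vectors: keep s largest."""
    y = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    y[keep] = x[keep]
    return y

x = proj_affine(np.zeros(n))        # start from the least-norm feasible point
for _ in range(500):
    x = proj_affine(proj_sparse(x, s))

print("distance to true sparse solution:", np.linalg.norm(x - x_true))
```

Note that neither projection set is convex here (the sparse set is a union of subspaces), so classical firm-nonexpansiveness arguments do not apply directly; that gap is precisely what the generalized regularity notions in this talk fill.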

Luca Zanni - Parameter estimation in regularization models for Poisson data

In many imaging applications the image intensity is measured by counting incident particles; consequently, the fluctuations in the counting process can be taken into account by modeling the data as realizations of Poisson random variables. In such cases, the maximum likelihood approach to image restoration leads to minimization problems in which the data-fidelity function is the generalized Kullback-Leibler (KL) divergence. Since, in general, these optimization problems are ill-conditioned, regularization approaches are necessary, and the design of strategies for selecting a proper value of the regularization parameter is a crucial issue. This is still an open problem for Poisson data, although in special cases interesting contributions have recently been provided. In this work we consider some regularization models and the theoretical and numerical issues concerning their parameter estimation. Following the idea of the discrepancy principle, we discuss strategies that provide a parameter estimate as the solution of a discrepancy equation, as well as regularization models in which a suitable estimate is obtained by solving constrained optimization problems which force an upper bound on the discrepancy function. Furthermore, reconstruction strategies that require only an overestimate of the regularization parameter will also be presented.

Yuri Boykov - Combinatorial optimization for higher-order segmentation functionals: Entropy, Color Consistency, Curvature, etc.

This talk discusses recent advances in image segmentation in the context of higher-order appearance and smoothness functionals. The most basic segmentation energies combine unary terms enforcing certain color appearance models and pairwise terms enforcing boundary smoothness. If the color models are known, the corresponding binary optimization problem can be globally minimized. Estimation of color models leads to NP-hard mixed optimization problems that are typically solved with iterative block-coordinate descent (Zhu-Yuille, Chan-Vese, GrabCut, etc.) sensitive to initialization. This talk motivates higher-order appearance functionals (e.g. entropy and color consistency) that do not require explicit estimation of segment appearance models. We show that in many cases such energies can be minimized globally. For example, our approach allows replacing the iterative "GrabCut" technique with a one-cut method that finds a global minimum. We also discuss a general trust region approach for the approximate minimization of other higher-order appearance terms. Time permitting, we will also motivate higher-order boundary smoothness terms (e.g. curvature) and describe the corresponding state-of-the-art combinatorial optimization techniques.

François-Xavier Vialard - New mathematical models in Computational Anatomy

One goal of computational anatomy is to develop statistical models of biomedical shapes. In this talk, we will briefly present the so-called LDDMM model and propose new geometrical models to overcome some of its shortcomings. In particular, we will present the use of left-invariant metrics on groups of diffeomorphisms and the use of the de Rham theorem to define Riemannian metrics on spaces of shapes that satisfy constraints of interest.


## 3 comments:

Hi Igor,

I am having problems with the links to the slides. Many interesting talks at this conference.

Manny

Hi look at the below link to access the slides!

https://fadili.users.greyc.fr/mia/events/fgmia-14/program/

Best wishes

Calvin

Calvin,

There are still slides missing.

Igor.
