Stefan Schiffler just sent me the following:
Dear Igor,
I just wanted to provide you with another implementation for the Compressive Sensing: The Big Picture page. It implements the feature-sign search algorithm presented in
Bangti Jin, Dirk A. Lorenz, and Stefan Schiffler (2009): Elastic-Net Regularization: Error Estimates and Active Set Methods. Inverse Problems 25(11), 115022 (26pp).
One must be careful when using function handles with it.
The code rfss.m is here. Thanks, Stefan! I'll add it to the Big Picture shortly.
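For readers without MATLAB who want to see the optimization problem rfss.m targets, here is a minimal numpy sketch that solves the same elastic-net problem by cyclic coordinate descent. This is not Schiffler's feature-sign search algorithm, just an illustrative stand-in; the function names and parameter choices are mine.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator, the prox of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def elastic_net_cd(A, y, alpha, beta, n_iter=200):
    """Minimize 0.5*||y - A b||^2 + alpha*||b||_1 + 0.5*beta*||b||^2
    by cyclic coordinate descent (an illustrative stand-in for
    feature-sign search, which solves the same problem)."""
    m, n = A.shape
    b = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)          # ||a_j||^2 for each column
    r = y - A @ b                          # current residual
    for _ in range(n_iter):
        for j in range(n):
            # correlation with the partial residual excluding coordinate j
            rho = A[:, j] @ r + col_sq[j] * b[j]
            b_new = soft_threshold(rho, alpha) / (col_sq[j] + beta)
            r += A[:, j] * (b[j] - b_new)  # update residual incrementally
            b[j] = b_new
    return b
```

Note that when alpha exceeds the largest correlation |A^T y|, the all-zero vector is optimal and the iteration stays there, which is a quick sanity check on any elastic-net solver.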
So you expected to go home for the weekend with nothing to take home. Well, I have good news and bad news. The bad news is that there is plenty to read; the good news is that some of these papers are likely important to you. Enjoy!
Among the papers expected to have some major impact here is: Task-Driven Dictionary Learning by Julien Mairal, Francis Bach, and Jean Ponce. The abstract reads:
Abstract: Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
High Range Resolution Profiling for Stepped-Frequency Radar with Sparse Reconstruction by Yang Hu, Yimin Liu, Huadong Meng. The abstract reads:
Abstract: The newly emerging theory of compressed sensing (CS) enables the recovery of a sparse signal from an inadequate number of linear projections. Based on compressed sensing theory, a new strategy for high-resolution range profiling of stepped-frequency (SF) radar with missing pulses is proposed. If a certain part of the received pulses is missing, we demonstrate that the proposed method is capable of reconstructing the target profile with high fidelity. MATLAB simulation results are presented to verify the proposed method. Furthermore, we use collected data from a real SF radar to generate the target high-resolution range profile (HRRP). Results are compared with the traditional 'Stretch' method to prove the method's applicability.
Action recognition in video by sparse representation on covariance manifolds of silhouette tunnels by Kai Guo, Prakash Ishwar, and Janusz Konrad. The abstract reads:
A novel framework for action recognition in video using empirical covariance matrices of bags of low-dimensional feature vectors is developed. The feature vectors are extracted from segments of silhouette tunnels of moving objects and coarsely capture their shapes. The matrix logarithm is used to map the segment covariance matrices, which live in a nonlinear Riemannian manifold, to the vector space of symmetric matrices. A recently developed sparse linear representation framework for dictionary-based classification is then applied to the log-covariance matrices. The log-covariance matrix of a query segment is approximated by a sparse linear combination of the log-covariance matrices of training segments and the sparse coefficients are used to determine the action label of the query segment. This approach is tested on the Weizmann and the UT-Tower human action datasets. The new approach attains a segment-level classification rate of 96.74% for the Weizmann dataset and 96.15% for the UT-Tower dataset. Additionally, the proposed method is computationally and memory efficient and easy to implement.
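The matrix-logarithm mapping used above is easy to sketch in numpy (an illustrative sketch, not the authors' code): the eigendecomposition of a symmetric positive-definite covariance matrix gives its matrix logarithm, which lives in the flat vector space of symmetric matrices where ordinary sparse coding applies.

```python
import numpy as np

def spd_log(C):
    """Matrix logarithm of a symmetric positive-definite matrix via
    its eigendecomposition: log(C) = V diag(log w) V^T."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix (inverse of spd_log)."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

# Map an empirical covariance matrix of a bag of feature vectors to the
# vector space of symmetric matrices (the sizes here are arbitrary).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))                 # bag of 5-dim features
C = np.cov(X, rowvar=False) + 1e-6 * np.eye(5)    # SPD covariance
L = spd_log(C)                                    # log-covariance descriptor
```

The mapping is exactly invertible (spd_exp recovers C from L), so no information is lost in moving to the vector space.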
Action recognition using sparse representation on covariance manifolds of optical flow by Kai Guo, Prakash Ishwar, and Janusz Konrad. The Action recognition page is here. The abstract reads:
A novel approach to action recognition in video based on the analysis of optical flow is presented. Properties of optical flow useful for action recognition are captured using only the empirical covariance matrix of a bag of features such as flow velocity, gradient, and divergence. The feature covariance matrix is a low-dimensional representation of video dynamics that belongs to a Riemannian manifold. The Riemannian manifold of covariance matrices is transformed into the vector space of symmetric matrices under the matrix logarithm mapping. The log-covariance matrix of a test action segment is approximated by a sparse linear combination of the log-covariance matrices of training action segments using a linear program and the coefficients of the sparse linear representation are used to recognize actions. This approach based on the unique blend of a log-covariance descriptor and a sparse linear representation is tested on the Weizmann and KTH datasets. The proposed approach attains leave-one-out cross validation scores of 94.4% correct classification rate for the Weizmann dataset and 98.5% for the KTH dataset. Furthermore, the method is computationally efficient and easy to implement.
The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices by Zhouchen Lin, Minming Chen, Leqin Wu, and Yi Ma. The abstract reads:
This paper proposes scalable and fast algorithms for solving the Robust PCA problem, namely recovering a low-rank matrix with an unknown fraction of its entries being arbitrarily corrupted. This problem arises in many applications, such as image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad conditions, the Robust PCA problem can be exactly solved via convex optimization that minimizes a combination of the nuclear norm and the $\ell^1$-norm. In this paper, we apply the method of augmented Lagrange multipliers (ALM) to solve this convex program. As the objective function is non-smooth, we show how to extend the classical analysis of ALM to such new objective functions and prove the optimality of the proposed algorithms and characterize their convergence rate. Empirically, the proposed new algorithms can be more than five times faster than the previous state-of-the-art algorithms for Robust PCA, such as the accelerated proximal gradient (APG) algorithm. Moreover, the new algorithms achieve higher precision, yet being less storage/memory demanding. We also show that the ALM technique can be used to solve the (related but somewhat simpler) matrix completion problem and obtain rather promising results too. Matlab code of all algorithms discussed are available at this http URL
Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming by Alexandre Belloni, Victor Chernozhukov, and Lie Wang. The abstract reads:
We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors $p$ is large, possibly much larger than $n$, but only $s$ regressors are significant. The method is a modification of LASSO, called square-root LASSO. The method neither relies on the knowledge of the standard deviation $\sigma$ of the regression errors nor does it need to pre-estimate $\sigma$. Despite not knowing $\sigma$, square-root LASSO achieves near-oracle performance, attaining the convergence rate $\sigma \sqrt{(s/n)\log p}$, and thus matching the performance of the standard LASSO that knows $\sigma$. Moreover, we show that these results are valid for both Gaussian and non-Gaussian errors, under some mild moment restrictions, using moderate deviation theory. Finally, we formulate the square-root LASSO as a solution to a convex conic programming problem, which allows us to use efficient computational methods, such as interior point methods, to implement the estimator.
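To make the square-root LASSO objective concrete, here is a minimal numpy sketch that minimizes ||y - Xb||_2 / sqrt(n) + lam * ||b||_1 by naive proximal (sub)gradient steps. The paper instead solves an equivalent conic program with interior-point methods; this iteration is only illustrative, and the step size and parameter names are my own choices.

```python
import numpy as np

def sqrt_lasso(X, y, lam, step=0.01, n_iter=500):
    """Sketch of the square-root LASSO: minimize
    ||y - X b||_2 / sqrt(n) + lam * ||b||_1
    by proximal gradient on the first term (differentiable away from
    a zero residual) and soft-thresholding for the l1 term."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ b
        nrm = np.linalg.norm(r)
        if nrm < 1e-12:               # residual hit zero: gradient undefined
            break
        g = -X.T @ r / (np.sqrt(n) * nrm)   # gradient of ||y - Xb|| / sqrt(n)
        b = b - step * g
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # prox of l1
    return b
```

A distinctive feature visible even in this toy version: the penalty level lam that kills all coefficients does not depend on the noise scale, which is the "pivotal" property the paper exploits.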
Morphological dilation image coding with context weights prediction by Jiaji Wu, Yan Xing, Anand Paul, Yong Fang, Jechang Jeong, Licheng Jiao, Guangming Shi. The abstract reads:
This paper proposes an adaptive morphological dilation image coding with context weights prediction. The new dilation method is not to use fixed models, but to decide whether a coefficient needs to be dilated or not according to the coefficient's predicted significance degree. It includes two key dilation technologies: 1) controlling dilation process with context weights to reduce the output of insignificant coefficients, and 2) using variable-length group test coding with context weights to adjust the coding order and cost as few bits as possible to present the events with large probability. Moreover, we also propose a novel context weight strategy to predict coefficient's significance degree more accurately, which serves for two dilation technologies. Experimental results show that our proposed method outperforms the state of the art image coding algorithms available today.
If option pricing models have sparse representations, what is incoherent with these functions? Well, I am thinking too far ahead. Instead, let us look at the sparse representation thingy: A reduced basis for option pricing by Rama Cont, Nicolas Lantos, and Olivier Pironneau. The abstract reads:
We introduce a reduced basis method for the efficient numerical solution of partial integro-differential equations which arise in option pricing theory. Our method uses a basis of functions constructed from a sequence of Black-Scholes solutions with different volatilities. We show that this choice of basis leads to a sparse representation of option pricing functions, yielding an approximation whose precision is exponential in the number of basis functions. A Galerkin method using this basis for solving the pricing PDE is presented. Numerical tests based on the CEV diffusion model and the Merton jump diffusion model show that the method has better numerical performance relative to commonly used finite-difference and finite-element methods. We also compare our method with a numerical Proper Orthogonal Decomposition (POD). Finally, we show that this approach may be used advantageously for the calibration of local volatility functions.
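The core idea, a basis of Black-Scholes solutions at different volatilities, can be illustrated with a toy numpy/scipy fit (my own sketch, not the authors' Galerkin method; all sizes and parameter values are arbitrary): price curves at a handful of volatilities already reproduce a curve at an intermediate volatility well.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price (vectorized over strikes K)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Reduced basis: call-price curves (over strikes) at a few volatilities.
S0, T, r = 100.0, 1.0, 0.02
strikes = np.linspace(60, 140, 81)
vols = [0.1, 0.2, 0.3, 0.4]
basis = np.column_stack([bs_call(S0, strikes, T, r, v) for v in vols])

# Target: a price curve at an intermediate volatility, fit by least squares.
target = bs_call(S0, strikes, T, r, 0.25)
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ coef
```

A least-squares fit is of course much cruder than the paper's Galerkin projection onto this basis, but it already shows why so few basis functions suffice: option prices vary smoothly with volatility.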
Compressed sensing for chemical shift-based water–fat separation by Mariya Doneva, Peter Börnert, Holger Eggers, Alfred Mertins, John Pauly, Michael Lustig. The abstract reads:
Multi echo chemical shift-based water–fat separation methods allow for uniform fat suppression in the presence of main field inhomogeneities. However, these methods require additional scan time for chemical shift encoding. This work presents a method for water–fat separation from undersampled data (CS-WF), which combines compressed sensing and chemical shift-based water–fat separation. Undersampling was applied in the k-space and in the chemical shift encoding dimension to reduce the total scanning time. The method can reconstruct high quality water and fat images in 2D and 3D applications from undersampled data. As an extension, multipeak fat spectral models were incorporated into the CS-WF reconstruction to improve the water–fat separation quality. In 3D MRI, reduction factors of above three can be achieved, thus fully compensating the additional time needed in three-echo water–fat imaging. The method is demonstrated on knee and abdominal in vivo data.
A novel memory-efficient fast algorithm for 2-D compressed sensing by Huiyuan Wang Vieira, Jose Jesus, Brun. The abstract reads:
The basic theories of compressed sensing (CS) turn around the sampling and reconstruction of 1-D signals. To deal with 2-D signals (images), the conventional treatment is to convert them into 1-D vectors. This has drawbacks, including huge memory demands and difficulties in the design and calibration of the optical imaging systems. As a result, in 2009 some researchers proposed the concept of compressed imaging (CI) with separable sensing operators. However, their work is only focused on the sampling phase. In this paper, we propose a scheme for 2-D CS that is memory- and computation-efficient in both sampling and reconstruction. This is achieved by decomposing the 2-D CS problem into two stages with the help of an intermediate image. The intermediate image is then solved by direct orthogonal linear transform and the original image is reconstructed by solving a set of 1-D l1-norm minimization sub-problems. The experimental results confirm the feasibility of the proposed scheme.
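The memory argument for separable sensing operators comes down to the Kronecker identity vec(A X B^T) = (A ⊗ B) vec(X) (for row-major vectorization): applying two small matrices to the image is equivalent to applying one huge 1-D sensing matrix to the flattened image, but the huge matrix never has to be formed. A quick numpy check, with illustrative sizes of my choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 16, 12          # image size
km, kn = 8, 6          # measurements per dimension
A = rng.standard_normal((km, m))   # row sensing matrix
B = rng.standard_normal((kn, n))   # column sensing matrix
X = rng.standard_normal((m, n))    # the "image"

# Separable sampling: two small matrix products.
Y_sep = A @ X @ B.T

# Equivalent 1-D view: a (km*kn) x (m*n) matrix acting on vec(X).
# For row-major (C-order) flattening, vec(A X B^T) = kron(A, B) vec(X).
Phi = np.kron(A, B)
y_full = Phi @ X.ravel()
```

Here Phi has km*kn*m*n entries versus km*m + kn*n for the two factors, a gap that becomes enormous at realistic image sizes.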
Three dimension double inversion recovery gray matter imaging using compressed sensing by Gho SM, Nam Y, Zho SY, Kim EY, Kim DH. The abstract reads:
The double inversion recovery (DIR) imaging technique has various applications such as black blood magnetic resonance imaging and gray/white matter imaging. Recent clinical studies show the promise of DIR for high resolution three dimensional (3D) gray matter imaging. One drawback in this case however is the long data acquisition time needed to obtain the fully sampled 3D spatial frequency domain (k-space) data. In this paper, we propose a method to solve this problem using the compressed sensing (CS) algorithm with contourlet transform. The contourlet transform is an effective sparsifying transform especially for images with smooth contours. Therefore, we applied this algorithm to undersampled DIR images and compared with a CS algorithm using wavelet transform by evaluating the reconstruction performance of each algorithm for undersampled k-space data. The results show that the proposed CS algorithm achieves a more accurate reconstruction in terms of the mean structural similarity index and root mean square error than the CS algorithm using wavelet transform.
Credit: NASA