Summer generally allows for some downtime, at which point I can update other pages such as the Reproducible Research (implementations) page, the Big Picture in Compressive Sensing or the Advanced Matrix Factorization Jungle page. Here are the implementations I will shortly add to these pages. They include all the implementations listed in the Nuit Blanche in Review entries from January 2014 through yesterday's entry:
Compressive Sensing
- Adaptive-Rate Compressive Sensing Using Side Information - implementation -
- NLCS : Non-Local Compressive Sampling Recovery - implementation -
- Multidimensional Compressed Sensing and their Applications - implementation -
- The SwAMP Thing! Sparse Estimation with the Swept Approximated Message-Passing Algorithm - implementation -
- On the Convergence of Approximate Message Passing with Arbitrary Matrices
- NLR-CS : Compressive Sensing via Nonlocal Low-rank Regularization - implementation -
- Matrix-free Interior Point Method for Compressed Sensing Problems / A Second order Method for Compressed Sensing with Coherent and Redundant Dictionaries - implementation -
- emd_flow : The Constrained Earth Mover Distance Model, with Applications to Compressive Sensing
- Tree_approx : A Fast Approximation Algorithm for Tree-Sparse Recovery
- CGIHT, ASD and ScaledASD: Compressed Sensing and Matrix Completion
- GAGA: GPU Accelerated Greedy Algorithms for Compressed Sensing - new version -
- RCoS : Image Compressive Sensing Recovery via Collaborative Sparsity
- GrAMPA: Generalized Approximate Message Passing for Analysis Compressive Sensing
- SparseFHT: A Fast Hadamard Transform for Signals with Sub-linear Sparsity in the Transform Domain - implementation -
- EMaC : Robust Spectral Compressed Sensing via Structured Matrix Completion
- Summary of Contribution: Compressed sensing with linear correlation between signal and measurement noise
- STSBL-EM : Spatiotemporal Sparse Bayesian Learning with Applications to Compressed Sensing of Multichannel Physiological Signals
- Sparse Spikes Deconvolution with Continuous Basis-Pursuit
- Bayesian Pursuit Algorithms
- Image Compressive Sensing Recovery Using Adaptively Learned Sparsifying Basis via L0 Minimization
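Most of the compressive sensing entries above are, at their core, sparse-recovery solvers (greedy, thresholding-based, message-passing or convex). For orientation only, here is a minimal sketch of plain iterative soft-thresholding (ISTA) for the l1-regularized least-squares problem; it is a generic textbook baseline, not the method of any particular paper listed above, and the toy problem sizes and regularization value are made up.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||y - A x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L             # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return x

# toy example (sizes made up): recover a 10-sparse vector from 80 Gaussian measurements
rng = np.random.default_rng(0)
n, m, k = 256, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=500)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```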
Regression
Optimization solvers
Compressive Detection
Sparse Approximation/Reconstruction/Denoising
Blind Deconvolution
Matrix Factorization (other than NMF)
- GreBsmo: Greedy Bilateral Sketch, Completion & Smoothing - implementation -
- Tight convex relaxations for sparse matrix factorization - implementation -
- Fast matrix completion without the condition number - implementation -
- Dynamic MR image reconstruction–separation from undersampled (k,t)-space via low-rank plus sparse prior - implementation -
- TGA: Grassmann Averages for Scalable Robust PCA
- Spectral redemption: clustering sparse networks
- Sparse Randomized Kaczmarz for Multiple Measurement Vectors
- Fast and Robust Archetypal Analysis for Representation Learning
- Probabilistic Archetypal Analysis
- ROSL : Robust Orthonormal Subspace Learning: Efficient Recovery of Corrupted Low-rank Matrices - implementation -
- CGIHT, ASD and ScaledASD: Compressed Sensing and Matrix Completion
- PSPG: Efficient Algorithms for Robust and Stable Principal Component Pursuit Problems
- CPCP: Scalable Robust Matrix Recovery: Frank-Wolfe Meets Proximal Methods
- ADMIP: An Alternating Direction Method with Increasing Penalty for Stable Principal Component Pursuit
- OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage
- Scalable methods for nonnegative matrix factorizations of near-separable tall-and-skinny matrices
- Introduction into cross approximation / Fast multidimensional convolution in low-rank formats via cross approximation - implementation -
- LRRSC : Subspace Clustering by Exploiting a Low-Rank Representation with a Symmetric Constraint - implementation -
- WNNM: Weighted Nuclear Norm Minimization with Application to Image Denoising
- Sparse Coding and Dictionary Learning for Symmetric Positive Definite Matrices: A Kernel Approach
- Low-Rank Modeling of Local k-Space Neighborhoods (LORAKS): Implementation and Examples for Reproducible Research
- IRNN: Generalized Nonconvex Nonsmooth Low-Rank Minimization
- HASI: Probabilistic Low Rank Matrix Completion with Adaptive Spectral Regularization Algorithms
- Interest Zone Matrix Approximation
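A good number of the entries above (robust PCA, matrix completion, weighted nuclear norm minimization, adaptive spectral regularization) use singular-value shrinkage as their core primitive. As a hedged illustration of that primitive only, here is a minimal soft-impute-style matrix completion sketch in NumPy; it is the generic textbook iteration, not the code of any implementation listed, and the shrinkage parameter and toy data are arbitrary.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_impute(M_obs, mask, tau=1.0, n_iter=200):
    """Fill the unobserved entries with the current estimate, then shrink the spectrum.
    Only the entries of M_obs where mask is True are ever used."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        X = svt(np.where(mask, M_obs, X), tau)
    return X

# toy example (rank, size and tau made up): rank-2 matrix, half the entries observed
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.5
X = soft_impute(M, mask)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```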
NMF
- Random Projections for Non-negative Matrix Factorization
- Enhancing Pure-Pixel Identification Performance via Preconditioning - implementation -
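Both NMF entries above build on, or compare against, plain nonnegative matrix factorization. For reference only, here is a minimal sketch of the classical Lee-Seung multiplicative updates for the Frobenius-norm objective; the rank, iteration count and toy data below are made up.

```python
import numpy as np

def nmf_mu(V, r, n_iter=500, eps=1e-10):
    """Classical Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)      # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)      # update W with H fixed
    return W, H

# toy example (rank and sizes made up): exact nonnegative rank-3 matrix
rng = np.random.default_rng(1)
V = rng.random((60, 3)) @ rng.random((3, 40))
W, H = nmf_mu(V, r=3)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```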
Nonlinear Compressive Sensing
One-Bit
Phase Retrieval
Randomized Numerical Linear Algebra
Tensor
Machine Learning
- Mondrian Forests: Efficient Online Random Forests - implementation -
- Understanding Random Forests: From Theory to Practice - implementation -
- SPAL: Sparse Projection-based Adaptive Learning
- Active Subspace: Towards Scalable Low-Rank Learning
- A Riemannian approach to low-rank algebraic Riccati equations - implementation -
- KNIFE: Automatic Feature Selection via Weighted Kernels and Regularization
- Avoiding pathologies in very deep networks
- Kernel LMS algorithm with forward-backward splitting for dictionary learning
- MISO: Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
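MISO above is an incremental majorization-minimization method. To illustrate just the basic majorization-minimization idea (batch, not incremental, and much simpler than MISO), here is a sketch for logistic regression where each step exactly minimizes a quadratic surrogate that upper-bounds the loss; the curvature bound ||X||^2/4 is the standard one, and the toy data are made up.

```python
import numpy as np

def mm_logistic(X, y, n_iter=1000):
    """Batch majorization-minimization for logistic regression: each iteration
    minimizes a quadratic surrogate whose curvature bounds the loss Hessian."""
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2 / 4.0           # majorizing curvature
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # current probabilities
        w = w - X.T @ (p - y) / L                 # minimizer of the surrogate
    return w

# toy example (data made up): recover weights from synthetic 0/1 labels
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
print(mm_logistic(X, y))
```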
Subspace Learning
- LSR: Robust and Efficient Subspace Segmentation via Least Squares Regression
- PETRELS: Parallel Subspace Estimation and Tracking by Recursive Least Squares from Partial Observations
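If I read the LSR entry correctly, its self-representation step is a ridge-regression problem with a closed-form solution; under that assumption, here is a minimal sketch that forms the representation matrix and the symmetric affinity one would hand to spectral clustering. The regularization value and toy subspace data are made up.

```python
import numpy as np

def lsr_affinity(X, lam=0.01):
    """Self-representation Z minimizing ||X - X Z||_F^2 + lam ||Z||_F^2 (closed form),
    symmetrized into an affinity matrix; columns of X are data points."""
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    return np.abs(Z) + np.abs(Z).T

# toy example (dimensions and lam made up): points from two 2-D subspaces of R^20
rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30)),
               rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))])
W = lsr_affinity(X)
print(W.shape)   # (60, 60) affinity, ready for spectral clustering
```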
Applications
- Randomized Comments: Code for GWAS and CS study, a new blog, Geometric theory of information
- PURIFY: a new algorithmic framework for next-generation radio-interferometric imaging - implementation -
- REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time
- HODLR and george: Fast Direct Methods for Gaussian Processes and the Analysis of NASA Kepler Mission Data - implementation -
- CaSPIAN: A Causal Compressive Sensing Algorithm for Discovering Directed Interactions in Gene Networks - implementation -
- Saturday Morning Video: SVO: Fast Semi-Direct Monocular Visual Odometry - implementation -
- SUGAR : Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection
- Selecting thresholding and shrinking parameters with generalized SURE for low rank matrix estimation
- The Flutter Shutter Code Calculator - implementation -
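SUGAR and the generalized SURE entry above both select regularization parameters by minimizing an unbiased estimate of the risk. As a self-contained illustration of the underlying idea only, here is the classical SURE formula for scalar soft-thresholding under Gaussian noise, used to pick a threshold on a toy sparse signal; it is the textbook special case, not the generalized estimators of those papers.

```python
import numpy as np

def sure_soft_threshold(y, lam, sigma):
    """Stein unbiased estimate of E||soft(y, lam) - x||^2 for y = x + N(0, sigma^2 I)."""
    residual = np.sum(np.minimum(np.abs(y), lam) ** 2)
    div = np.count_nonzero(np.abs(y) > lam)       # divergence of the soft-threshold map
    return -y.size * sigma**2 + residual + 2 * sigma**2 * div

# toy example (signal made up): pick the threshold that minimizes SURE
rng = np.random.default_rng(0)
x = np.zeros(1000)
x[:50] = 5.0
sigma = 1.0
y = x + sigma * rng.standard_normal(x.size)
lams = np.linspace(0.0, 5.0, 101)
best = min(lams, key=lambda l: sure_soft_threshold(y, l, sigma))
print("SURE-selected threshold:", best)
```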
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.