In version 1.6, the core SPGL1 function was generalized to solve the above three problems with
the $\ell_1$ norm replaced by any norm $\|\cdot\|$. In order to do so, it requires that functions be provided for computing: (1) the (primal) norm $\|x\|$, (2) the corresponding dual norm $\|x\|_*$, and (3) the (possibly weighted) Euclidean projection
$$\mathrm{project}(x) := \mathop{\text{arg\,min}}_{p} \ \|p - x\|_2 \quad \text{subject to} \quad \|p\| \le \tau.$$
Using this framework we implemented two new solvers (for more information, see spgdemo.m and Extensions).
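For the familiar one-norm case, $\|x\| = \|x\|_1$, ingredients (1) and (2) are simply norm(x,1) and norm(x,inf), while (3) is the usual sort-and-threshold projection onto the one-norm ball. Below is a minimal sketch of that projection for illustration only; it is not SPGL1's own routine, and the function name is made up.

```matlab
% Sketch of ingredient (3) for kappa(x) = ||x||_1:
% Euclidean projection of x onto the ball {p : ||p||_1 <= tau}.
% (Hypothetical helper, not part of SPGL1; assumes tau > 0.)
function p = project_onenorm_ball(x, tau)
   if norm(x(:), 1) <= tau
      p = x;                               % already inside the ball
      return
   end
   u     = sort(abs(x(:)), 'descend');     % sorted magnitudes
   cs    = cumsum(u);
   % largest k such that u(k) > (cs(k) - tau)/k
   k     = find(u > (cs - tau)./(1:numel(u))', 1, 'last');
   theta = (cs(k) - tau)/k;                % soft-threshold level
   p     = sign(x).*max(abs(x) - theta, 0);
end
```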
Multiple measurement vectors (MMV)
The MMV version of BPDN currently implemented solves
$$\text{(MMV)} \qquad \mathop{\text{minimize}}_{X} \ \|X\|_{1,2} \quad \text{subject to} \quad \|AX - B\|_{2,2} \le \sigma,$$
where the mixed $(p,q)$-norm $\|X\|_{p,q}$ ($p, q \ge 1$) is defined by
$$\|X\|_{p,q} := \Big( \sum_{i=1}^{m} \|X_i^T\|_q^p \Big)^{1/p},$$
with $X$ an $m \times n$ matrix and where $X_i$ denotes the $i$-th row of $X$. When weights are given, they apply to each row of $X$.
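As a quick illustration, the mixed $(p,q)$-norm above is a one-liner in Matlab, and the new MMV solver can then be tried on a toy row-sparse problem. This is only a sketch: the call to spg_mmv assumes the interface demonstrated in spgdemo.m, and the problem sizes are made up.

```matlab
% Mixed (p,q)-norm of a matrix X: rows measured in the q-norm, and the
% resulting row norms combined in the p-norm (illustrative helper).
mixed_pq_norm = @(X,p,q) sum(sum(abs(X).^q, 2).^(p/q))^(1/p);

% Toy MMV problem: recover a row-sparse X0 from B = A*X0.
m = 60; n = 128; k = 10;
A    = randn(m, n);
X0   = zeros(n, k);
rows = randperm(n);
X0(rows(1:5), :) = randn(5, k);           % 5 nonzero (jointly sparse) rows
B     = A*X0;
sigma = 1e-3;
X = spg_mmv(A, B, sigma);                 % assumed SPGL1 1.6 interface
fprintf('||X||_{1,2} = %g\n', mixed_pq_norm(X, 1, 2));
```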
Group-sparse BPDN
In group-sparse BPDN each entry of $x$ is assigned a group. Denoting by $\mathcal{I}_i$ the indices of group $i$, the group-sparse BPDN problem, for $\sigma \ge 0$, is given by
$$\text{(Group)} \qquad \mathop{\text{minimize}}_{x} \ \sum_i \|x_{\mathcal{I}_i}\|_2 \quad \text{subject to} \quad \|Ax - b\|_2 \le \sigma.$$
It is assumed that the $\mathcal{I}_i$ are disjoint and that their union gives the set $\{1, \dots, n\}$. When weights are given, they apply to each group.
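A matching sketch for the group case, again hedged: the label-vector encoding of the groups and the call to spg_group assume the interface demonstrated in spgdemo.m, and the toy problem is made up.

```matlab
% Group objective sum_i ||x_{I_i}||_2, with groups encoded as a label
% vector: groups(j) is the index of the group containing x(j).
group_norm = @(x, groups) sum(sqrt(accumarray(groups(:), abs(x(:)).^2)));

% Toy group-sparse problem: 30 disjoint groups of 3 entries each.
m = 40; n = 90;
groups = kron((1:30)', ones(3,1));
A  = randn(m, n);
x0 = zeros(n, 1);
x0(groups <= 4) = randn(12, 1);           % only 4 groups are active
b     = A*x0;
sigma = 1e-3;
x = spg_group(A, b, groups, sigma);       % assumed SPGL1 1.6 interface
fprintf('group norm of solution: %g\n', group_norm(x, groups));
```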
For more information you may want to check this page. This is a nice addition, as we have seen mixed norms pop up several times before, particularly in:
- Robust recovery of signals from a union of subspaces by Yonina Eldar and Moshe Mishali,
- Under-determined source separation via mixed-norm regularized minimization by M. Kowalski, E. Vincent and Remi Gribonval and,
- Sparsity-Enforced Slice-Selective MRI RF Excitation Pulse Design by Adam Zelinski, Lawrence L. Wald, Kawin Setsompop, Vivek Goyal and Elfar Adalsteinsson
Michael Friedlander, one of the authors of SPGL1, further pointed out the following when shown the SPGL1 and TV minimization post by Laurent Jacques:
A while ago I had wondered about extending SPGL1 to other norms, including TV. But I couldn't think of a good way to do projections onto the "TV-norm ball". (Projection onto the one-norm ball is a crucial part of the SPGL1 algorithm.)...The solver now handles "group" one-norms. These are useful when subsets of coefficients need to light up or shut off as a group. Complex-valued problems are a special case, where we expect that the real and complex components should be jointly sparse.
SPARCO is still at version 1.2 but an overview of the Sparco toolbox is given in this technical report.
Way to go Michael and Ewout!
The latest news for CVX (Matlab Software for Disciplined Convex Programming) by Michael Grant, Stephen Boyd, and Yinyu Ye is:
Fuller support for 64-bit platforms
CVX version 1.2 now supports 64-bit versions of Matlab on Windows, Linux, and Macintosh platforms; precompiled MEX files are supplied for each of these platforms. Previously, only SDPT3 was supported on 64-bit platforms; now SeDuMi is as well. However, due to the evolution of Matlab's MEX API, Matlab version 7.3 or later is required for 64-bit platforms; and SeDuMi requires version 7.5 or later. (Earlier versions of Matlab are still supported for 32-bit platforms.)
Support for the exponential family of functions
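To give a sense of what that enables, here is a tiny, hypothetical CVX snippet exercising one of the exponential-family functions (log_sum_exp); the model and data are made up purely for illustration.

```matlab
% Hypothetical example: minimize a log-sum-exp loss with a small
% one-norm penalty and a box constraint (assumes CVX with
% exponential-family support is installed).
m = 20; n = 5;
A = randn(m, n);
cvx_begin
    variable x(n)
    minimize( log_sum_exp(A*x) + 0.1*norm(x, 1) )
    subject to
        abs(x) <= 2
cvx_end
```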
In unrelated news:
- there is an updated version of Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization by Benjamin Recht, Maryam Fazel, and Pablo Parrilo.
- Mikkel Schmidt has a talk on Bayesian non-negative matrix factorization (NMF) using a Gibbs sampling approach.
- Jian-Feng Cai, Stanley Osher, and Zuowei Shen, Linearized Bregman Iterations For Frame-Based Image Deblurring
- and there is a new paper entitled 3D reconstruction from a single image by Diego Rother and Guillermo Sapiro. I'll come back to that last one later.
Hi Igor,
You may like the following videos from ICML'08 about Matrix Factorization:
Bayesian MF:
http://videolectures.net/icml08_salakhutdinov_bpm/
New approach to solve NMF:
http://videolectures.net/icml08_ghodsi_nmf/
A lot of reading after this post :-)
Regards,
Kayhan
Hi Kayhan,
Thanks for the links.
Cheers,
Igor.