Friday, June 28, 2013

Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization

What if techniques of compressive sensing could be used to help compressive sensing itself (and many other fields)? That is the subject of the following paper, which looks at recovering a sparse Hessian for derivative-free optimization computations. I also wonder how this could be applied to learning kernels.

Interpolation-based trust-region methods are an important class of algorithms for Derivative-Free Optimization which rely on locally approximating an objective function by quadratic polynomial interpolation models, frequently built from fewer points than there are basis components. Often, in practical applications, the contribution of the problem variables to the objective function is such that many pairwise correlations between variables are negligible, implying, in the smooth case, a sparse structure in the Hessian matrix. To be able to exploit Hessian sparsity, existing optimization approaches require knowledge of the sparsity structure. The goal of this paper is to develop and analyze a method where the sparse models are constructed automatically. The sparse recovery theory developed recently in the field of compressed sensing characterizes conditions under which a sparse vector can be accurately recovered from few random measurements. Such a recovery is achieved by minimizing the l1-norm of a vector subject to the measurement constraints. We suggest an approach for building sparse quadratic polynomial interpolation models by minimizing the l1-norm of the entries of the model Hessian subject to the interpolation conditions. We show that this procedure recovers accurate models when the function Hessian is sparse, using relatively few randomly selected sample points. Motivated by this result, we developed a practical interpolation-based trust-region method using deterministic sample sets and minimum l1-norm quadratic models. Our computational results show that the new approach exhibits promising numerical performance both in the general case and in the sparse one.
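For illustration, here is a minimal sketch (not the authors' code) of the model-building step described in the abstract: fit a quadratic m(x) = c + g'x + 0.5 x'Hx to the sampled function values while minimizing the l1-norm of the Hessian entries. It assumes numpy and cvxpy are available; the test function, dimensions, and sample count are made up for the example.

import numpy as np
import cvxpy as cp

def sparse_quadratic_model(Y, fvals):
    """Interpolate the data with m(x) = c + g'x + 0.5 x'Hx, picking the
    model whose Hessian has minimum l1-norm (sum of absolute entries)."""
    p, n = Y.shape
    c = cp.Variable()
    g = cp.Variable(n)
    H = cp.Variable((n, n), symmetric=True)
    # Interpolation conditions m(y_i) = f(y_i); these are linear in
    # (c, g, H), so the whole problem is a linear program in disguise.
    constraints = [c + g @ Y[i] + 0.5 * cp.quad_form(Y[i], H) == fvals[i]
                   for i in range(p)]
    cp.Problem(cp.Minimize(cp.norm(cp.vec(H), 1)), constraints).solve()
    return c.value, g.value, H.value

# Hypothetical test: 6 variables, a Hessian with only 3 nonzero entries,
# and 15 sample points -- far fewer than the 28 quadratic basis components.
rng = np.random.default_rng(0)
f = lambda x: x[0]**2 + x[1]*x[2] + 3.0*x[4]
Y = rng.standard_normal((15, 6))
c, g, H = sparse_quadratic_model(Y, np.array([f(y) for y in Y]))
print(np.round(H, 2))  # should be close to the true sparse Hessian

Note that the paper's practical method uses deterministic sample sets inside a trust-region loop; the random points above only mirror the recovery analysis.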


Comments:

  1. I am not sufficiently familiar with the optimisation jargon, so I am going to have to ask a stupid question: if the optimisation is "derivative-free", why is the Hessian relevant?

  2. Thomas, the optimization is derivative-free in the sense that the derivative of the objective function is unknown. The approach in the paper involves approximating the objective function and using the derivative information of the approximation.

  3. Ah, OK. I understood it as not having to compute a derivative.
