Tuesday, September 01, 2015

Dictionary Learning for Blind One Bit Compressed Sensing

So this is how the Great Convergence occurs, step by step: this article could have been called Sparse Coding for One Bit Signals had it been written for the matrix factorization crowd, or Supervised Learning of a Shallow Network for Sparse Binary Signals had it been pitched to the Machine Learning community. In the end, it doesn't matter:


This letter proposes a dictionary learning algorithm for blind one bit compressed sensing. In the blind one bit compressed sensing framework, the original signal to be reconstructed from one bit linear random measurements is sparse in an unknown domain. In this context, the product of the measurement matrix $\mathbf{A}$ and the sparse domain matrix $\mathbf{\Phi}$, i.e., $\mathbf{D}=\mathbf{A}\mathbf{\Phi}$, should be learned. Hence, we use dictionary learning to train this matrix. Towards that end, an appropriate continuous convex cost function is suggested for one bit compressed sensing and a simple steepest-descent method is exploited to learn the rows of the matrix $\mathbf{D}$. Experimental results show the effectiveness of the proposed algorithm against the case of no dictionary learning, especially as the number of training signals and the number of sign measurements increase.
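To make the setting concrete, here is a minimal toy sketch of the idea, not the letter's exact algorithm: one bit measurements $y = \mathrm{sign}(\mathbf{A}\mathbf{\Phi}x)$ are generated, and the combined matrix $\mathbf{D}\approx\mathbf{A}\mathbf{\Phi}$ is updated by steepest descent on a convex sign-consistency cost. The hinge-type cost, the step size, the row normalization, and the use of a known sparse code (in place of the full alternating sparse-coding stage) are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 64, 256, 8                                  # signal length, sign measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)          # random measurement matrix
Phi = np.linalg.qr(rng.standard_normal((n, n)))[0]    # unknown sparse domain (here: random orthonormal)

# Ground-truth sparse code and one bit measurements y = sign(A Phi x)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = np.sign(A @ Phi @ x_true)

# Blind setting: learn D ~ A Phi directly, without knowing A or Phi separately.
D = rng.standard_normal((m, n)) / np.sqrt(m)

def one_bit_cost_grad(D, x, y):
    """Hinge-type convex cost sum_i max(0, -y_i * d_i^T x) and its gradient with respect to D (an assumed surrogate)."""
    margins = y * (D @ x)
    violated = margins < 0                    # measurements whose sign is inconsistent with D
    cost = -margins[violated].sum()
    grad = -np.outer(y * violated, x)         # gradient of the violated terms, row by row
    return cost, grad

step = 0.05
for it in range(200):                         # steepest-descent updates of the rows of D
    cost, grad = one_bit_cost_grad(D, x_true, y)
    D -= step * grad
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # keep rows normalized to rule out the trivial D = 0

print("remaining sign-consistency violations:", int((np.sign(D @ x_true) != y).sum()))

In the full method, many training signals are used and the sparse codes themselves are re-estimated between dictionary updates; the sketch above only shows the dictionary-update half on a single signal with its code held fixed.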
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
