
Tuesday, December 31, 2013

Nuit Blanche In Review (December 2013)




Happy New Year to those of you already in 2014!


What a year! The Nuit Blanche monthly reviews produced this past year can be found in the next 12 entries under the NuitBlancheReview tag.

Those entries enable me to update, every month or two, the list of implementations made available by their authors. Hence, the following reference pages will be updated shortly:
Before we embark on the listing of this month's entries, here are some entries that for one reason or another stuck with me:
Here is what we had this past month:

Implementations:
Focused entries:

Kickstarter

Blogging

Other:
Other videos

Monday, December 30, 2013

Do Deep Nets Really Need to be Deep?

There are several ways of learning the Identity (through compressive sensing in the semilinear model, and Machine Learning algorithms for nonlinear models, see [5]) with deep or not-so-deep neural networks, and there is currently much schizophrenia in trying to figure out what a good model is. On the one hand, you want to have shallow models such as k-Sparse Autoencoders, and deep down one of the reasons for this is the hope of having Provable Algorithms for Machine Learning Problems in our lifetimes. On the other, deep networks seem to provide more accuracy on benchmarks.

One can make this comparison between shallow and deep networks in several ways: by comparing results from shallow and deep networks on good databases / benchmarks, through the acid test of sharp phase transitions (see [1,2,3,4,5,6]), or maybe by seeing how much approximating a deep network with a shallow one reduces its precision: the idea being that approximating a deeper network with a shallower one gives an idea of the legitimacy of investing much time in deeper networks ... or not. This is precisely what the next paper does:


Do Deep Nets Really Need to be Deep? by Lei Jimmy Ba, Rich Caruana

Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
from the beginning of the paper:

You are given a training set with 10M labeled points. When you train a shallow neural net with one fully-connected feedforward hidden layer on this data you obtain 85% accuracy on test data. When you train a deeper neural net as in [2] consisting of convolutional layers, pooling layers, and multiple fully-connected feedforward layers on the same data you obtain 90% accuracy on the test set. What is the source of this magic? Is the 5% increase in accuracy of the deep net over the shallow net because: a) the deep net has more parameters than the shallow net; b) the deep net is deeper than the shallow net; c) nets without convolution can’t learn what nets with convolution can learn; d) current learning algorithms and regularization procedures work better with deep architectures than with shallow architectures; e) all or some of the above; f) none of the above?
from Rich Caruana's page
We're doing new work on what we call Model Compression where we take a large, slow, but accurate model and compress it into a much smaller, faster, yet still accurate model. This allows us to separate the models used for learning from the models used to deliver the learned function so that we can train large, complex models such as ensembles, but later make them small enough to fit on a PDA, hearing aid, or satellite. With model compression we can make models 1000 times smaller and faster with little or no loss in accuracy. Here's our first paper on model compression.
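To make the mimic-training idea concrete, here is a minimal sketch (a toy of my own, not the authors' setup): a fixed, deeper random network plays the teacher, and a shallow one-hidden-layer student is trained by gradient descent to match the teacher's real-valued outputs (logits) rather than hard labels. All names, sizes and the learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed, deeper random net producing logits.
d, h, n = 20, 64, 2000
W1t, W2t, W3t = rng.normal(size=(d, h)), rng.normal(size=(h, h)), rng.normal(size=(h, 1))

def teacher(X):
    return np.tanh(np.tanh(X @ W1t) @ W2t) @ W3t  # real-valued logits, not labels

X = rng.normal(size=(n, d))
y = teacher(X)  # the student mimics these soft targets

# Shallow "student": one hidden layer, trained by full-batch gradient descent
# on the squared error between its outputs and the teacher's logits.
m = 256
W1 = rng.normal(size=(d, m)) * 0.1
W2 = rng.normal(size=(m, 1)) * 0.1
lr = 0.01
for step in range(2000):
    H = np.tanh(X @ W1)                  # hidden activations
    err = H @ W2 - y                     # residual on the logits
    gW2 = H.T @ err / n
    gH = err @ W2.T * (1 - H**2)         # backprop through tanh
    gW1 = X.T @ gH / n
    W1 -= lr * gW1
    W2 -= lr * gW2

print("mimic MSE:", np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
```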
Personally, I think sharp phase transitions will eventually be the great equalizers.

References
  1. Sunday Morning Insight: Randomization is not a dirty word
  2. Sunday Morning Insight: Sharp Phase Transitions in Machine Learning ?
  3. Sunday Morning Insight: Exploring Further the Limits of Admissibility
  4. Sunday Morning Insight: The Map Makers
  5. Quick Panorama of Sensing from Direct Imaging to Machine Learning 
  6. Faster Than a Blink of an Eye.

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

Video (in French): Machine Learning Meetup #6: Jouer avec Kaggle / Detection de Botnets

The video for the Paris Machine Learning Meetup #6 has been out for a little while now, but I just noticed I had not listed it here before (all the archives are here). The video (in French) is at:



This 6th edition was about Kaggle and botnet detection with neural networks (Jouer avec Kaggle / Détection de Botnets, i.e., Playing with Kaggle / Botnet Detection). The meetup took place on December 11th, 2013 at DojoEvents, a great venue for hosting meetups. Here are the slides:


A summary is at Paris Machine Learning Meetup #6 Summary and a follow-up thought was written in a full-blown Sunday Morning Insight: Randomization is not a dirty word.

One of the speakers was covered by Mathilde Damgé of the newspaper Le Monde in Kaggle, le site qui transforme le « big data » en or (Kaggle, the site that turns "big data" into gold).

Next Meetup is January 15th, see you there!


Sunday, December 29, 2013

16 months of Sunday Morning Insight entries

It has now been 16 months since the Sunday Morning Insight series saw its first entry. Here is a compendium of what came out during that period. The first batch is mostly about compressive sensing, sensors, phase transitions and their eventual connection to machine learning. In all cases, it looks as though sharp phase transitions between P and NP are bound to be the acid tests for the linear and nonlinear models used in machine learning and sensing. Central to this view is the fact that generic sensing is now increasingly part of a larger spectrum of ideas on how to capture data from Nature and the world around us. Let us hope that it will produce a pooling of efforts similar to the one we have seen since 2004 in the development of sparsity-seeking solvers. The second batch is about new fields that could benefit from the reevaluation afforded by compressive sensing and attendant techniques. Enjoy!





W00085666.jpg was taken on December 26, 2013 and received on Earth December 27, 2013. The camera was pointing toward SATURN at approximately 1,281,012 miles (2,061,589 kilometers) away, and the image was taken using the MT2 and CL2 filters. 
Image Credit: NASA/JPL/Space Science Institute

Friday, December 27, 2013

Nuit Blanche: The Way You See It.



Every once in a while, I have to explain what Nuit Blanche is. I often trip on what the goals of the blog are and eventually always mention that it is really targeted toward specialists. I don't think that's a good way to introduce the subject :-) Here is what some of you had to say about it following the recent Favor to Ask entry.

Patrick Gill
Senior Research Scientist at Rambus
Igor's Nuit Blanche blog is a fantastic resource for the entire compressed sensing community, both for its comprehensiveness and for its cogent summaries of cutting-edge work.

Meena Mani
Brain image analysis: data analysis and neuroscience
Igor's blog is a great resource/bulletin board for all that is going on in the world of compressed sensing and related areas. I have attended workshops that I wouldn't ordinarily have attended after learning about them on Nuit Blanche. I am impressed not only by Igor's energy and enthusiasm but also by his generosity and interest in encouraging graduate students and young researchers.

Danny Bickson
Co-Founder at GraphLab
Let me make it short: Igor is found everywhere. Igor knows everything. With unlimited energy he follows research in multiple disciplines and has reached the level of a science guru. One million page reads of science a year says it all. Igor sets a new bar we should all strive for.

Thomas Arildsen
Assistant Professor at Aalborg University
Igor provides a great service to the scientific community with his Nuit Blanche blog which is a must-follow for anyone trying to keep track of compressed sensing.

Thong Do
Co-Founder at ADATAO, Inc.
Igor has a great vision to build the Nuit Blanche blog for the compressed sensing research community. This blog is very helpful for grad students, researchers and other professionals who are interested in compressed sensing and its development. When I was a grad student, I read the blog daily to keep myself updated with the latest advances in the field. The blog gave me a fast and very efficient way to acquire state-of-the-art knowledge of the field without spending a lot of time searching around. The blog contributes partially to the rapid growth of the compressed sensing field as it helps research works/papers become more visible to other researchers in a timely fashion. Igor also has many great ideas and actions to build a stronger research community around his blog. He often shares his own discussions with other researchers about some specific topics or about their latest published works/papers publicly on the blog. Those discussions bring out many valuable insights for other researchers in the field. In short, I strongly believe that the blog and Igor's approach to building a research community around it are disruptive innovations that fundamentally change the way people do research and publish their research works.
Elan Pavlov
at Deductomix
Igor provides an invaluable resource to the compressed sensing community. Igor acts as an informal clearing house for new results in the field and for interesting and novel hypotheses.
Jean-Luc Bouchot
Visiting assistant professor @ Drexel University
Igor administers blogs, websites, and groups revolving around compressive sensing and matrix factorization. His resources are of very high quality and serve as references for anybody (from beginners to advanced/expert researchers).

I always get only useful information out of this.

Paul Shearer
Research Fellow at University of Michigan
Nuit Blanche is a must-read for anyone interested in the latest exciting ideas in signal processing and machine learning. Igor does a great job of sifting the wheat from the chaff, delivering the best of the day, week, and month to researchers around the world. 
Igor is a strong advocate and facilitator of the best practices in modern science. He encourages researchers to make their results reproducible, open access, and open source, and facilitates open discussion of preprint articles to assess and improve their quality. He collects the latest algorithms and software into useful repositories such as the Matrix Factorization Jungle, which is the largest collection of links to matrix factorization implementations online.
These are the sorts of things that the whole research community should be doing, but relatively few actually do. I don't know of anyone else who does more to make these good things happen, and he does it for free, out of the love of science. That is something very special, and I just hope he keeps it up.

Laurent Jacques
Professor and F.R.S.-FNRS Research Associate at Université catholique de Louvain

Since I started to work in compressed sensing and inverse problem solving, I have known and frequently used the blog Nuit Blanche, developed and maintained by Igor Carron. Reading it daily is a permanent source of inspiration and helps every researcher stay informed of the most recent developments in the community. I have to say that this is probably the only website that covers all the aspects of these two topics, from the most advanced theories to recent applications and technologies. Igor's skill in detecting and explaining intuitively difficult concepts extends also to other fields like matrix completion, computational biology, astronomy or computational photography.

Emmanuel Candes and Terry Tao also wrote about Nuit Blanche in the Dec. '08 issue of the IEEE Information Theory Society Newsletter with these words:

 For instance, the blog Nuit Blanche [1], edited by Igor Carron, has at least one entry per day, each introducing two or three—sometimes even more— papers in the field. This blog is widely read and Igor reports a few thousand daily hits. In the academic community, the level of excitement is also clearly perceptible in several special issues that have been dedicated to this topic, see [2] for example.
Thank you Patrick, Meena, Danny, Thomas, Thong, Elan, Jean-Luc, Laurent, Emmanuel and Terry for the good words. If you feel like saying something similar about Nuit Blanche, you can add a recommendation on my LinkedIn profile.

Credit: NASA/ESA, SOHO

Thursday, December 26, 2013

Provable Algorithms for Machine Learning Problems


Very interesting thesis at the crossroads between matrix factorization and machine learning: Provable Algorithms for Machine Learning Problems by Rong Ge
Modern machine learning algorithms can extract useful information from text, images and videos. All these applications involve solving NP-hard problems in the average case using heuristics. What properties of the input allow it to be solved efficiently? Theoretically analyzing the heuristics is very challenging. Few results were known. This thesis takes a different approach: we identify natural properties of the input, then design new algorithms that provably work assuming the input has these properties. We are able to give new, provable and sometimes practical algorithms for learning tasks related to text corpora, images and social networks.
The first part of the thesis presents new algorithms for learning the thematic structure in documents. We show that under a reasonable assumption, it is possible to provably learn many topic models, including the famous Latent Dirichlet Allocation. Our algorithm is the first provable algorithm for topic modeling. An implementation runs 50 times faster than the latest MCMC implementation and produces comparable results. The second part of the thesis provides ideas for provably learning deep, sparse representations. We start with sparse linear representations, and give the first algorithm for the dictionary learning problem with provable guarantees. Then we apply similar ideas to deep learning: under reasonable assumptions our algorithms can learn a deep network built by denoising autoencoders.
The final part of the thesis develops a framework for learning latent variable models. We demonstrate how various latent variable models can be reduced to orthogonal tensor decomposition, and then be solved using the tensor power method. We give a tight sample complexity analysis for the tensor power method, which reduces the number of samples required for learning many latent variable models.
In theory, the assumptions in this thesis help us understand why intractable problems in machine learning can often be solved; in practice, the results suggest inherently new approaches for machine learning. We hope the assumptions and algorithms inspire new research problems and learning algorithms.
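As an illustration of that last part, here is a minimal sketch (a toy of mine, not code from the thesis) of the tensor power method: build an orthogonally decomposable symmetric 3-way tensor and recover one of its components by repeated contraction.

```python
import numpy as np

rng = np.random.default_rng(7)
d, r = 8, 3
U, _ = np.linalg.qr(rng.normal(size=(d, r)))    # orthonormal components u_i
lam = np.array([3.0, 2.0, 1.0])                 # their weights lambda_i
T = np.einsum('i,ai,bi,ci->abc', lam, U, U, U)  # T = sum_i lam_i u_i (x) u_i (x) u_i

v = rng.normal(size=d)
v /= np.linalg.norm(v)
for _ in range(50):
    v = np.einsum('abc,b,c->a', T, v, v)        # power update: v <- T(I, v, v)
    v /= np.linalg.norm(v)

# v has converged (up to sign) to one of the columns of U
print(np.abs(U.T @ v).round(3))
```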

Other related posts: 








Wednesday, December 25, 2013

Toeplitz Matrices and Compressive Sensing

Last week, at the last GdR meeting "MIA et Ondes" organized by Alexandre Baussard and Jalal Fadili, I had a brief conversation with Frederic Barbaresco and Antoine Liutkus. While discussing, it became apparent that some of the things Frederic does in radar signal processing might be very relevant to a set of issues we have seen in compressive sensing and related fields. In particular, their sensors put out covariance matrices that happen to have a Toeplitz structure. A review of the literature on the more general model of covariance matrices will come later, but in order to see whether the interesting mathematics developed there would be of further interest, I went back to the search feature of this blog and started looking for instances of Toeplitz matrices, even though in some cases those are not positive definite.

First, a definition: quite simply, a Toeplitz matrix is "a matrix with the same entries along all its diagonals". Because these matrices represent discretized versions of convolution, they pop up in many instances of sensing, or in our case, in many measurement matrices. In short, any time there is tomography, a Toeplitz matrix should be in the surroundings. They also characterize discrete linear time invariant (LTI) systems.
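To make the convolution connection explicit, here is a minimal sketch (names and sizes are mine, for illustration only): a lower-triangular Toeplitz matrix built from a filter reproduces discrete convolution with that filter.

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([1.0, 0.5, 0.25])        # hypothetical filter / point spread function
n = 8
col = np.r_[h, np.zeros(n - len(h))]  # first column: the zero-padded filter
T = toeplitz(col, np.zeros(n))        # constant along every diagonal

x = np.random.default_rng(1).normal(size=n)
# applying T is exactly (truncated) discrete convolution with h
assert np.allclose(T @ x, np.convolve(h, x)[:n])
```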

From [1]
Signal processing theory such as prediction, estimation, detection, classification, regression, and communications and information theory are most thoroughly developed under the assumption that the mean is constant and that the covariance is Toeplitz, i.e., KX(k, j) = KX(k − j), in which case the process is said to be weakly stationary. (The terms “covariance stationary” and “second order stationary” also are used when the covariance is assumed to be Toeplitz.) In this case the n × n covariance matrices Kn = [KX(k, j); k, j = 0, 1, . . . , n − 1] are Toeplitz matrices. Much of the theory of weakly stationary processes involves applications of Toeplitz matrices. Toeplitz matrices also arise in solutions to differential and integral equations, spline functions, and problems and methods in physics, mathematics, statistics, and signal processing.
From [2] 

Toeplitz matrices appear naturally in a variety of problems in engineering. Since positive semi-definite Toeplitz matrices can be viewed as shift-invariant autocorrelation matrices, considerable attention has been paid to them, especially in the areas of stochastic filtering and digital signal processing applications [12] and [21]. Several problems in digital signal processing and control theory require the computation of a positive definite Toeplitz matrix that closely approximates a given matrix. For example, because of rounding or truncation errors incurred while evaluating F, F does not satisfy one or all conditions. Another example: in the power spectral estimation of a wide-sense stationary process from a finite number of data, the matrix F, formed from the estimated autocorrelation coefficients, is often not a positive definite Toeplitz matrix [18]. In control theory, the Gramian assignment problem for discrete-time single input systems requires the computation of a positive definite Toeplitz matrix which also satisfies certain inequality constraints [16].
As one can see from the references below, most of the community's concern has been focused on their use as measurement matrices and on how admissibility conditions could be derived (the Restricted Isometry Property). The Toeplitz model is also behind the streaming measurement model and attendant reconstruction. The generic problem of tomography is a convolution of the real world with probes, and in essence we are looking to understand those measurement matrices. In particular, coded aperture falls into that Toeplitz/convolution model.
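As a quick empirical illustration of that concern (a toy check of mine, not taken from the references), one can verify that a subsampled circular convolution with a random sign filter approximately preserves the norm of sparse vectors, which is the behavior the RIP formalizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 512, 128, 8
c = rng.choice([-1.0, 1.0], size=n)        # random +-1 filter
rows = rng.choice(n, m, replace=False)     # keep m of the n convolution outputs

def A(x):  # subsampled circular convolution, applied via the FFT
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))[rows] / np.sqrt(m)

ratios = []
for _ in range(200):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # k-sparse vector
    ratios.append(np.linalg.norm(A(x)) / np.linalg.norm(x))
print(min(ratios), max(ratios))  # concentrated around 1
```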

From slides of [40]

From [4]


In our context, the design of a metric specific to the manifold of positive definite, or positive definite Toeplitz, matrices could have an impact on everything that lets us better design and understand the measurement matrix used in actual hardware, i.e., blind deconvolution or calibration. It could also have an impact on detection in systems other than radar, such as recent compressive hardware like the light field and time-of-flight cameras, or even random systems ... to be continued.

Merry Christmas y'all.

PS: As a passing note, I noted in 2007 that Nick Trefethen had pointed out that both some random matrices and Toeplitz matrices have large pseudo-spectra:

The eigenvalues of finite banded Toeplitz matrices lie on curves in the complex plane that have been characterized by Schmidt and Spitzer [SS60]. In contrast, the spectrum of the corresponding infinite dimensional Toeplitz operator is the set of points in the complex plane that a(T) encloses with non-zero winding number; see Theorem 1.17 of [BS99]. The limit of the spectrum of banded Toeplitz matrices as the dimension goes to infinity is thus typically very different from the spectrum of the limit.
This uncomfortable situation can be resolved by considering the behavior of pseudospectra. Though the eigenvalues of the finite Toeplitz matrices may fall on curves in the complex plane, Landau [Lan75], Reichel and Trefethen [RT92b], and Böttcher [Böt94] have observed that the resolvent norm ||(zI-TN)-1|| grows exponentially in the matrix dimension for all z in the interior of the spectrum of the corresponding infinite dimensional operator. (As a result, it is difficult to accurately compute eigenvalues of non-symmetric banded Toeplitz matrices even for matrices of relatively modest dimensions, and this signals a warning against basing analysis of finite Toeplitz matrices on eigenvalues alone.) Furthermore, the above-cited authors also concluded that the pseudospectra of TN converge to the pseudospectra of T.
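A quick numerical illustration of that point (my own, with an arbitrary choice of matrix): take the simplest banded Toeplitz matrix, a single subdiagonal of ones. Every finite section has all of its eigenvalues at zero, yet for z = 0.5, a point inside the spectrum of the infinite operator (the closed unit disk), the resolvent norm blows up exponentially with the dimension, exactly as described above.

```python
import numpy as np

z = 0.5
for N in (10, 20, 40):
    T = np.diag(np.ones(N - 1), k=-1)        # shift matrix: eigenvalues all zero
    R = np.linalg.inv(z * np.eye(N) - T)     # resolvent at z
    print(N, np.linalg.norm(R, 2))           # grows roughly like 2**N
```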

References:
  1. Toeplitz and Circulant Matrices: A Review by Robert M. Gray
  2. Toeplitz Matrix Approximation by Suliman Al-Homidan
  3. Coding and sampling for compressive tomography, David J. Brady
  4. Random Convolution and l_1 Filtering by Justin Romberg
  5. Architectures for Compressive Sampling Justin Romberg
  6. Compression Limits for Random Vectors with Linearly Parameterized Second-Order Statistics by Daniel Romero, Roberto Lopez-Valcarce, Geert Leus
  7. Joint Sparsity Recovery for Spectral Compressed Sensing by Yuejie Chi
  8. Covariance Estimation in High Dimensions via Kronecker Product Expansions by Theodoros Tsiligkaridis, Alfred O. Hero III
  9. Sensing by Random Convolution by Justin Romberg
  10. Compressive Sensing by Random Convolution by Justin Romberg
  11. Practical Compressive Sensing with Toeplitz and Circulant Matrices Wotao Yin, Simon Morgan, Junfeng Yang, Yin Zhang
  12. Random Filters for Compressive Sampling and Reconstruction by J. Tropp, M. Wakin, M. Duarte, D. Baron, and R. Baraniuk
  13. Compressed Sensing of Analog Signals in Shift-Invariant Spaces, Yonina C. Eldar
  14. Compressive Sensing for Streaming Signals using the Streaming Greedy Pursuit, Petros T. Boufounos, M. Salman Asif
  15. Compressive System Identification (CSI): Theory and Applications of Exploiting Sparsity in the Analysis of High-Dimensional Dynamical Systems
  16.  Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications (See also the companion technical report).
  17. Compressive System Identification (CSI): Theory and Applications of Exploiting Sparsity in the Analysis of High-Dimensional Dynamical Systems. Borhan Sanandaji 
  18. Toeplitz Matrix Based Sparse Error Correction in System Identification: Outliers and Random Noises by Weiyu Xu, Er-Wei Bai, Myung Cho
  19. The Circulant Rational Covariance Extension Problem: The Complete Solution  by Anders Lindquist, Giorgio Picci
  20. Novel Toeplitz Sensing Matrices for Compressive Radar Imaging  Lu Gan , Kezhi Li , Cong Ling 
  21. Calibration and Blind Compressive Sensing
  22. Looking through walls and around corners with incoherent light: Wide-field real-time imaging through scattering media [updated]
  23. Learning a Circulant Sensing Matrix by Yangyang Xu, Wotao Yin, Susan Chen, and Stanley Osher
  24. Concentration of Measure Inequalities for Toeplitz Matrices with Applications by Borhan M. Sanandaji, Tyrone L. Vincent, and Michael B. Wakin
  25. On the Relation between Block Diagonal Matrices and Compressive Toeplitz Matrices by Han Lun Yap and Christopher J. Rozell.
  26. Compressive Topology Identification of Interconnected Dynamic Systems via Clustered Orthogonal Matching Pursuit by Borhan Sanandaji, Tyrone Vincent, and Michael Wakin
  27. Exact Topology Identification of Large-Scale Interconnected Dynamical Systems from Compressive Observations by Borhan Sanandaji, Tyrone Vincent, and Michael Wakin
  28. Orthogonal symmetric Toeplitz matrices for compressed sensing: Statistical isometry property by Kezhi Li, Lu Gan, Cong Ling.
  29. Circulant and Toeplitz Matrices in Compressed Sensing by Holger Rauhut.
  30. Toeplitz-Structured Chaotic Sensing Matrix for Compressive Sensing by Lei Yu, Jean-Pierre Barbot, Gang Zheng, Hong Sun
  31. YALL1: Your ALgorithms for L1 (Random Toeplitz / Circulant / Convolution)
  32. Practical compressive sensing with Toeplitz and circulant matrices by Wotao Yin, Simon Morgan, Junfeng Yang, Yin Zhang
  33. CS: A Short Discussion with Gerry Skinner, a Specialist in Coded Aperture Imaging. 
  34. A Restricted Isometry Property for Structurally-Subsampled Unitary Matrices by Waheed Bajwa, Akbar Sayeed and Robert Nowak
  35. Compressed Blind De-convolution by Venkatesh Saligrama, Manqi Zhao.
  36. Toeplitz-structured compressed sensing matrices by Waheed Bajwa, Jarvis Haupt, Gil Raz, Stephen Wright, and Robert Nowak
  37. Toeplitz Random Encoding for Reduced Acquisition Using Compressed Sensing by Haifeng Wang, Dong Liang, Kevin F. King, Leslie Ying. The attendant poster is here.
  38. Circulant and Toeplitz Matrices in Compressed Sensing by Holger Rauhut
  39. A New Algorithm for the Nearest Singular Toeplitz Matrix to a Given Toeplitz Matrix by Andrew Yagle
  40. Roummel Marcia and Rebecca Willett, Compressive Coded Aperture Superresolution Image Reconstruction - additional material can be found here while the slides are here
  41. CS: The EPFL CMOS CS Imager, Compressive Sampling of Pulse Trains Spread the Spectrum !
  42. Toeplitz compressed sensing matrices with applications to sparse channel estimation by Jarvis Haupt, Waheed Bajwa, Gil Raz and Robert Nowak
  43. Deterministic Designs with Deterministic Guarantees: Toeplitz Compressed Sensing Matrices, Sequence Designs and System Identification by Venkatesh Saligrama
  44. Compressed Sensing: Compressive Coded Aperture Superresolution Image Reconstruction, TOMBO, CASSI
  45. Compressed channel sensing by R. Nowak, W. Bajwa, and J. Haupt
  46. Toeplitz Block Matrices in Compressed Sensing by Florian Sebert, Leslie Ying, and Yi Ming Zou
  47. Annihilating filter-based decoding in the compressed sensing framework by Ali Hormati and Martin Vetterli "....we first denoise the signal using an iterative algorithm that finds the closest rank $k$ and Toeplitz matrix to the measurements matrix (in Frobenius norm) before applying the annihilating filter method..." 
  48. Toeplitz-structured compressed sensing matrices by Waheed Bajwa, Jarvis Haupt, Gil RazStephen Wright, and Robert Nowak
  49. Nick Trefethen pointed out that both some Random matrices and Toeplitz matrices have large pseudo-spectra: http://nuit-blanche.blogspot.fr/2007/07/random-projection-lapack.html
  50. Randomized Matrix Computations by Victor Y. Pan, Guoliang Qian, Ai-Long Zheng

Tuesday, December 24, 2013

Non-convex compressive sensing for X-ray CT: an algorithm comparison


A different kind of phase transition can be seen in the figure above: either the solver fails or it breaks through. When it breaks through, it means the solver found a sparser solution given the measurements. Here is the paper: Nonconvex compressive sensing for X-ray CT: an algorithm comparison by Rick Chartrand, Emil Y. Sidky, Xiaochuan Pan
Compressive sensing makes it possible to reconstruct images from severely underdetermined linear systems. For X-ray CT, this can allow high-quality images to be reconstructed from projections along few angles, reducing patient dose, as well as enable other forms of limited-view tomography such as tomosynthesis. Many previous results have shown that using nonconvex optimization can greatly improve the results obtained from compressive sensing, and several efficient algorithms have been developed for this purpose. In this paper, we examine some recent algorithms for CT image reconstruction that solve non-convex optimization problems, and compare their reconstruction performance and computational efficiency.
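For intuition about the nonconvex flavor (a toy sketch of mine, not the paper's algorithms or its CT setup), here is l_p minimization with p < 1 via iteratively reweighted least squares on a small random Gaussian system; the sizes and the epsilon schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k, p = 200, 80, 10, 0.5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

x = A.T @ np.linalg.solve(A @ A.T, y)   # minimum-l2 starting point
eps = 1.0
for _ in range(100):
    q = (x**2 + eps) ** (1 - p / 2)     # smoothed l_p weights (diagonal of Q)
    # solve min sum_i x_i^2 / q_i  subject to  A x = y
    x = q * (A.T @ np.linalg.solve((A * q) @ A.T, y))
    eps = max(0.9 * eps, 1e-9)          # gradually sharpen the nonconvexity

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```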





Sunday, December 22, 2013

VMX: Computer Vision for Everyone, a Kickstarter campaign

Let us put some context here: with error rates as low as 0.23% on the MNIST dataset or 2.53% on the NORB dataset, some people are already talking about Superhuman Visual Pattern Recognition. In a way, the models will continue to improve. At the same time, we continue to see the steamrollers doing their job: computations become cheaper and cheaper and can be done in the cloud. The time seems ripe for AI for images on the cloud.
Tomasz Malisiewicz, the man behind the tombone's computer vision blog, just graduated, founded a company (vision.ai) and is now starting a Kickstarter campaign: VMX: Computer Vision for Everyone



Here is Tomasz's introduction of the project:

Hi Igor,

We finally launched our kickstarter! We are trying to make computer vision, in particular real-time object detection and training, accessible to everyone. It would be awesome to get a short blurb with link to our kickstarter campaign on Nuit Blanche.

VMX Project: Computer Vision for Everyone
Webapp for real-time training of visual object detectors and an API for building vision-aware apps.


The VMX project was designed to bring cutting-edge computer vision technology to a very broad audience: hobbyists, researchers, artists, students, roboticists, engineers, and entrepreneurs. Not only will we educate you about potential uses of computer vision with our very own open-source vision apps, but the VMX project will give you all the tools you need to bring your own creative computer vision projects to life.


Our project video shows off our in-browser prototype in action, describes why we did everything in the browser, shows off some vision-aware apps we built and mentions why we've come to kickstarter.

The Kickstarter URL, which contains the video, listing of rewards, etc, is here:



or our short version


Thanks again,
Tomasz

Thank you Tomasz!


Friday, December 20, 2013

Compressed sensing and Approximate Message Passing with spatially-coupled Fourier and Hadamard matrices - implementation -

So it continues: Florent Krzakala, Jean Barbier and Christophe Schülke have a version of their seeded 'magic matrices' expanded to include Hadamard and Fourier blocks instead of random matrices.




Here is the paper: Compressed sensing and Approximate Message Passing with spatially-coupled Fourier and Hadamard matrices by Jean Barbier, Florent Krzakala, Christophe Schülke
We study compressed sensing of real and complex sparse signals using an optimal and computationally efficient procedure based on the combined use of approximate message-passing and spatially-coupled measurement matrices with fast Fourier and Hadamard operators. We compare the performance of our algorithm, that uses deterministic matrices, with the performance of approximate message-passing using purely random Gaussian matrices, for which the asymptotic behavior is exactly given by the density evolution. We show empirically that after proper randomization, the underlying structure of the operators does not significantly affect the performances of the reconstruction, thus allowing a fast and memory efficient reconstruction up to the information theoretic limit.
The results are as impressive as two years ago when they broke the Donoho-Tanner phase transition.
The implementation is on GitHub. The main project page is at: http://aspics.krzakala.org/.
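For intuition about why such structured operators are fast and memory efficient (a sketch of mine, not the authors' GitHub code), here is a measurement operator built from a random sign flip, an FFT and row subsampling: it is applied in O(n log n) time and never stores an m x n matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 256
signs = rng.choice([-1.0, 1.0], size=n)      # "proper randomization" of the input
rows = rng.choice(n, size=m, replace=False)  # which Fourier rows are kept

def A(x):   # forward operator: subsample the FFT of the sign-flipped signal
    return np.fft.fft(signs * x)[rows] / np.sqrt(n)

def At(y):  # adjoint: zero-fill, inverse FFT, undo the sign flip
    z = np.zeros(n, dtype=complex)
    z[rows] = y
    return signs * np.fft.ifft(z) * np.sqrt(n)

# sanity check that At really is the adjoint of A: <y, A x> == <At y, x>
x = rng.normal(size=n)
y = rng.normal(size=m) + 1j * rng.normal(size=m)
assert np.allclose(np.vdot(y, A(x)), np.vdot(At(y), x))
```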
Of related interest: Current trends in the statistical physics of disordered systems: From statistical physics to computer science by Florent Krzakala

