*Hx = N(Ax)*. There, we are not interested in image reconstruction but rather the reconstruction of a specific item in a series of measurements (the moving objects).

We present a compressive sensing protocol that tracks a moving object by removing static components from a scene. The implementation is carried out on a ghost imaging scheme to minimize both the number of photons and the number of measurements required to form a quantum image of the tracked object. This procedure tracks an object at low light levels with fewer than 3% of the measurements required for a raster scan, permitting us to more effectively use the information content in each photon.
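To make the idea concrete, here is a toy numerical sketch (not the authors' code; the patterns, sizes, and the ISTA solver are all illustrative choices): differencing the bucket measurements of two frames cancels the static components, leaving a sparse difference image that compressive sensing can recover from far fewer measurements than pixels.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                             # scene pixels, measurements (m << n)
A = rng.choice([-1.0, 1.0], size=(m, n))  # random +/-1 illumination patterns

static = rng.random(n)                    # static background scene
frame1 = static.copy()
frame2 = static.copy()
frame2[10] += 1.0                         # a moving object appears at pixel 10

# Differencing the bucket measurements cancels the static scene:
# dy = A @ (frame2 - frame1), the measurement of a 1-sparse difference image.
dy = A @ frame2 - A @ frame1

# ISTA (iterative soft thresholding) recovers the sparse difference image
x = np.zeros(n)
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
lam = 0.1
for _ in range(1000):
    x = x + A.T @ (dy - A @ x) / L        # gradient step on the data fit
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold

print(int(np.argmax(np.abs(x))))          # index of the moved pixel
```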

Next, we have the typical indirect coded-aperture imaging of X-ray observatories ( *x = L(Ax)* ). Let us note the reference to the potential use of compressive sensing. It is just a question of time before they come to our side of the Force.

L. Bouchet, P. Amestoy, A. Buttari, F.-H. Rouet, M. Chauvin

Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X-gamma-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
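The two computational steps (solving the sparse system, then extracting diagonal entries of its inverse for the variances) can be sketched with SciPy's sparse LU factorization standing in for MUMPS. The system below is a toy stand-in, not the SPI transfer function, and the diagonal is extracted naively by solving against unit vectors, whereas the MUMPS selected-inversion feature computes chosen inverse entries directly from the factors.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Toy sparse SPD system standing in for the SPI normal equations H x = b
n = 200
H = diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(H)        # sparse LU factorization (the role MUMPS plays at scale)
x = lu.solve(b)     # solution of the linear system

# Variances = diagonal entries of H^{-1}; extracted here by brute force,
# one solve per unit vector. MUMPS computes selected entries of the inverse
# directly from the factors, never forming anything dense.
I = np.eye(n)
var = np.array([lu.solve(I[:, j])[j] for j in range(n)])

print(bool(np.all(var > 0)))   # -> True: variances of an SPD system are positive
```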

Next we have some interesting phase retrieval imaging of the type:

*x = N2(N1(x))*

Since its discovery, the "ghost" diffraction phenomenon has emerged as a non-conventional technique for optical imaging with very promising advantages. However, extracting intensity and phase information of a structured and realistic object remains a challenge. Here, we show that a "ghost" hologram can be recorded with a single-pixel configuration by adapting concepts from standard digital holography. The presented homodyne scheme enables phase imaging with nanometric depth resolution, three-dimensional focusing ability, and shows high signal-to-noise ratio.
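For intuition, here is a scalar toy model of the four-step phase-shifting trick that such homodyne schemes adapt from digital holography (all values are illustrative; a real single-pixel setup records one bucket value per structured illumination pattern):

```python
import numpy as np

true_phase = 0.7                               # object phase to recover (rad)
bg, mod = 2.0, 1.0                             # background level, fringe modulation

# Bucket intensities for reference phase shifts 0, pi/2, pi, 3pi/2
shifts = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
I = bg + mod * np.cos(true_phase - shifts)

# Four-step formula: the quadrature differences isolate sin and cos of the phase
phase = np.arctan2(I[1] - I[3], I[0] - I[2])
print(round(phase, 3))                         # -> 0.7
```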

Here again we have sensing of the type:

*Hx = N(Ax)*

Silicon-based micro and nanoparticles have gained popularity in a wide range of biomedical applications due to their biocompatibility and biodegradability in-vivo, as well as a flexible surface chemistry, which allows drug loading, functionalization and targeting. Here we report direct in-vivo imaging of hyperpolarized 29Si nuclei in silicon microparticles by MRI. Natural physical properties of silicon provide surface electronic states for dynamic nuclear polarization (DNP), extremely long depolarization times, insensitivity to the in-vivo environment or particle tumbling, and surfaces favorable for functionalization. Potential applications to gastrointestinal, intravascular, and tumor perfusion imaging at sub-picomolar concentrations are presented. These results demonstrate a new background-free imaging modality applicable to a range of inexpensive, readily available, and biocompatible Si particles.

and some indirect imaging of the type:

*x = N(Ax)*

For a cone-beam three-dimensional computed tomography (3D-CT) scanning system, voxel size is an important indicator to guarantee the accuracy of data analysis and feature measurement based on 3D-CT images. Meanwhile, the voxel size changes as the rotary table moves along the X-ray direction. In order to realize automatic calibration of the voxel size, a new easily implemented method is proposed. According to this method, several projections of a spherical phantom are captured at different imaging positions and the corresponding voxel size values are calculated by non-linear least squares fitting. From these fitted values, a linear equation is obtained, which reflects the relationship between the displacement of the rotary table from its nominal zero position and the voxel size. Finally, the linear equation is imported into the calibration module of the 3D-CT scanning system, and as the rotary table moves along the X-ray direction, the accurate value of the voxel size is dynamically exported. The experimental results prove that this method meets the requirements of the actual CT scanning system, and has the virtues of easy implementation and high accuracy.
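The final calibration step can be sketched in a few lines, assuming an idealized magnification model (the geometry numbers and the `voxel_size` helper below are hypothetical; the paper fits the voxel sizes from sphere projections rather than computing them from known geometry):

```python
import numpy as np

# Hypothetical geometry: detector pixel pitch p, source-detector distance SDD,
# nominal source-object distance SOD; voxel size scales with magnification.
p, SDD, SOD = 0.2, 1000.0, 400.0

d = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])   # rotary-table displacements (mm)
v = p * (SOD + d) / SDD                          # voxel sizes from the sphere fits

# Linear calibration equation v = a*d + b, as in the paper's final step
a, b = np.polyfit(d, v, 1)

def voxel_size(displacement_mm):
    """Dynamically exported voxel size for a given table displacement."""
    return a * displacement_mm + b

print(round(voxel_size(5.0), 4))   # -> 0.081 (voxel size at d = 5 mm)
```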

There is a nice presentation on deep learning where you can read about sparse autoencoders: *Deep Learning for NLP (without Magic)* by Richard Socher and Christopher Manning.

ABSTRACT

Machine learning is everywhere in today's NLP, but by and large machine learning amounts to numerical optimization of weights for human designed representations and features. The goal of deep learning is to explore how computers can take advantage of data to develop features and representations appropriate for complex interpretation tasks. This tutorial aims to cover the basic motivation, ideas, models and learning algorithms in deep learning for natural language processing. Recently, these methods have been shown to perform very well on various NLP tasks such as language modeling, POS tagging, named entity recognition, sentiment analysis and paraphrase detection, among others. The most attractive quality of these techniques is that they can perform well without any external hand-designed resources or time-intensive feature engineering. Despite these advantages, many researchers in NLP are not familiar with these methods. Our focus is on insight and understanding, using graphical illustrations and simple, intuitive derivations. The goal of the tutorial is to make the inner workings of these techniques transparent, intuitive and their results interpretable, rather than black boxes labeled "magic here". The first part of the tutorial presents the basics of neural networks, neural word vectors, several simple models based on local windows and the math and algorithms of training via backpropagation. In this section applications include language modeling and POS tagging. In the second section we present recursive neural networks which can learn structured tree outputs as well as vector representations for phrases and sentences. We cover both equations as well as applications. We show how training can be achieved by a modified version of the backpropagation algorithm introduced before. These modifications allow the algorithm to work on tree structures. Applications include sentiment analysis and paraphrase detection. 
We also draw connections to recent work in semantic compositionality in vector spaces. The principal goal, again, is to make these methods appear intuitive and interpretable rather than mathematically confusing. By this point in the tutorial, the audience members should have a clear understanding of how to build a deep learning system for word-, sentence- and document-level tasks. The last part of the tutorial gives a general overview of the different applications of deep learning in NLP, including bag of words models. We will provide a discussion of NLP-oriented issues in modeling, interpretation, representational power, and optimization.
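As a taste of the first part of the tutorial, here is a minimal feed-forward network trained by plain backpropagation on a toy task (XOR stands in for the tutorial's NLP examples; the sizes, learning rate, and initialization are arbitrary illustrative choices, not the tutorial's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: learn XOR with one hidden layer and plain backpropagation
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                     # gradient of cross-entropy + sigmoid
    d_h = (d_out @ W2.T) * (1 - h**2)   # backpropagate through tanh
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(0)

print((out > 0.5).astype(int).ravel())  # predictions for the four XOR inputs
```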

Finally, Andrew Ng lets us know that a sparse autoencoder of the type

*x = N4(N3(N2(N1(x))))*

that produced the cat experiment can be run on off-the-shelf computer gear (COTS), in Deep Learning with COTS HPC Systems by Adam Coates, Brody Huval, Tao Wang, David J. Wu, and Andrew Ng:

Scaling up deep learning algorithms has been shown to lead to increased performance in benchmark tasks and to enable discovery of complex high-level features. Recent efforts to train extremely large networks (with over 1 billion parameters) have relied on cloud-like computing infrastructure and thousands of CPU cores. In this paper, we present technical details and results from our own system based on Commodity Off-The-Shelf High Performance Computing (COTS HPC) technology: a cluster of GPU servers with Infiniband interconnects and MPI. Our system is able to train 1 billion parameter networks on just 3 machines in a couple of days, and we show that it can scale to networks with over 11 billion parameters using just 16 machines. As this infrastructure is much more easily marshaled by others, the approach enables much wider-spread research with extremely large neural networks.
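A heavily scaled-down sketch of the sparse autoencoder idea at the heart of that work (illustrative sizes only; the actual models use local receptive fields, pooling, and billions of parameters, and Ng's tutorials use a KL-divergence sparsity penalty where a simple L1 penalty is used here to keep the demo short):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a 16 -> 32 -> 16 autoencoder; the L1 penalty on the hidden
# code pushes hidden units toward zero activity (the "sparse" part).
X = rng.random((200, 16))                        # stand-in input "patches"
W1 = rng.normal(0, 0.1, (16, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 16)); b2 = np.zeros(16)
lam, lr = 1e-3, 0.1
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def reconstruction_error():
    return np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - X) ** 2)

err_before = reconstruction_error()
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                     # encoder
    Xhat = h @ W2 + b2                           # linear decoder
    d_out = 2 * (Xhat - X) / X.size              # gradient of the squared error
    d_h = (d_out @ W2.T + lam / h.size) * h * (1 - h)  # + L1 sparsity gradient
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(reconstruction_error() < err_before)       # -> True: training reduced the error
```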

**Join the CompressiveSensing subreddit or the Google+ Community and post there !**

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
