
Wednesday, September 30, 2015

Training Deep Networks with Structured Layers by Matrix Backpropagation

  Extending backpropagation to include matrix factorization in deep learning; interesting, the great convergence continues:



Training Deep Networks with Structured Layers by Matrix Backpropagation by Catalin Ionescu, Orestis Vantzos, Cristian Sminchisescu

Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures trained using hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly larger receptive fields, and from the simplicity and scalability of the gradient-descent training procedure based on backpropagation. An open problem is the inclusion of layers that perform global, structured matrix computations like segmentation (e.g. normalized cuts) or higher-order pooling (e.g. log-tangent space metrics defined over the manifold of symmetric positive definite matrices) while preserving the validity and efficiency of an end-to-end deep training framework. In this paper we propose a sound mathematical apparatus to formally integrate global structured computation into deep computation architectures. At the heart of our methodology is the development of the theory and practice of backpropagation that generalizes to the calculus of adjoint matrix variations. We perform segmentation experiments using the BSDS and MSCOCO benchmarks and demonstrate that deep networks relying on second-order pooling and normalized cuts layers, trained end-to-end using matrix backpropagation, outperform counterparts that do not take advantage of such global layers.
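The structured layers in the paper are more elaborate, but the mechanical core, propagating gradients through a spectral decomposition, fits in a few lines. Here is a minimal sketch (not the authors' code, all sizes illustrative) of the adjoint of a log-SPD layer Y = logm(C), computed from the eigendecomposition of C and checked against a finite difference:

```python
# Sketch only: backprop through Y = logm(C) for symmetric positive definite C,
# using the eigendecomposition of C. Not the authors' implementation.
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)

def random_spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)            # well-conditioned SPD matrix

def logm_backward(C, dL_dY):
    """Return dL/dC for Y = logm(C), C symmetric positive definite."""
    lam, U = np.linalg.eigh(C)
    logs = np.log(lam)
    diff = lam[:, None] - lam[None, :]
    # Gamma_ij = (log lam_i - log lam_j) / (lam_i - lam_j), and 1/lam_i when i == j
    Gamma = np.where(np.abs(diff) > 1e-12,
                     (logs[:, None] - logs[None, :]) / np.where(diff == 0, 1.0, diff),
                     1.0 / lam[:, None])
    G = 0.5 * (dL_dY + dL_dY.T)               # use the symmetric part of the upstream gradient
    return U @ (Gamma * (U.T @ G @ U)) @ U.T

d = 5
C = random_spd(d)
G = rng.standard_normal((d, d)); G = 0.5 * (G + G.T)   # upstream gradient dL/dY
dC = logm_backward(C, G)

# Finite-difference check of L = trace(G^T logm(C)) along a random symmetric direction
E = rng.standard_normal((d, d)); E = 0.5 * (E + E.T)
eps = 1e-6
num = (np.trace(G.T @ logm(C + eps * E)).real -
       np.trace(G.T @ logm(C - eps * E)).real) / (2 * eps)
print(num, np.sum(dC * E))                    # the two numbers should match
```

The same pattern (eigendecompose, rescale in the eigenbasis, rotate back) is what lets second-order pooling and normalized-cut layers sit inside an end-to-end trained network.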
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, September 29, 2015

Anomaly Detection in High Dimensional Data: Hyperspectral data, movies and more...


From the CAIB report featured in The Modeler's Known Unknowns and Unknown Knowns
 
Anomaly detection is when you are concerned with the "unknown unknowns" or, to put it in a perspective that is currently sorely missing from many algorithms: you are dealing with sometimes adversarial/evasive counterparties or with unexpected, outside-the-model behaviors (outliers). There are some very sophisticated algorithms in machine learning and compressive sensing dealing with detailed classification, but when faced with unknown unknowns, you want to quantify anomaly detection, i.e. how far the data is from your "frame-of-mind" model. The high dimensional data afforded by cheap memory and CMOS is likely making these needles harder to find. Here are some recent preprints on the subject that showed up on my radar screen recently. And yes, sparsity is sometimes key to detecting them. Enjoy !




We discuss recent progress in techniques for modeling and analyzing hyperspectral images and movies, in particular for detecting plumes of both known and unknown chemicals. We discuss novel techniques for robust modeling of the background in a hyperspectral scene, and for detecting chemicals of known spectrum, we use partial least squares regression on a resampled training set to boost performance. For the detection of unknown chemicals we view the problem as an anomaly detection problem, and use novel estimators with low sample complexity for intrinsically low-dimensional data in high dimensions that enable us to model the "normal" spectra and detect anomalies. We apply these algorithms to benchmark data sets made available by Lincoln Labs at the Automated Target Detection program co-funded by NSF, DTRA and NGA, and compare, when applicable, to current state-of-the-art algorithms, with favorable results.
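For the "known chemical" part, the detector boils down to a regression from spectra to a presence score. Here is a hypothetical, self-contained sketch of that step, with synthetic pixels standing in for the real hyperspectral cubes (the resampling is a rough stand-in for the one mentioned in the abstract):

```python
# A small, hypothetical sketch of the "known chemical" detector idea: partial
# least squares regression from resampled labeled spectra to a presence score.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_bands, n_train = 60, 500
target = rng.random(n_bands)                      # known chemical spectrum
background = rng.random((n_train, n_bands))
abundance = rng.random(n_train) * (rng.random(n_train) < 0.3)  # sparse presence
X = background + np.outer(abundance, target)      # mixed pixels
y = (abundance > 0).astype(float)

# Resample to balance plume / background pixels before fitting.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
pls = PLSRegression(n_components=5).fit(X[idx], y[idx])

scores = pls.predict(X).ravel()                   # higher = more plume-like
print("mean score on plume pixels:", scores[y == 1].mean())
print("mean score on background:  ", scores[y == 0].mean())
```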
Optimal Sparse Kernel Learning for Hyperspectral Anomaly Detection
Zhimin Peng, Prudhvi Gurram, Heesung Kwon, Wotao Yin

In this paper, a novel framework of sparse kernel learning for Support Vector Data Description (SVDD) based anomaly detection is presented. In this work, optimal sparse feature selection for anomaly detection is first modeled as a Mixed Integer Programming (MIP) problem. Due to the prohibitively high computational complexity of the MIP, it is relaxed into a Quadratically Constrained Linear Programming (QCLP) problem. The QCLP problem can then be practically solved by using an iterative optimization method, in which multiple subsets of features are iteratively found as opposed to a single subset. The QCLP-based iterative optimization problem is solved in a finite space called the \emph{Empirical Kernel Feature Space} (EKFS) instead of in the input space or \emph{Reproducing Kernel Hilbert Space} (RKHS). This is possible because of the fact that the geometrical properties of the EKFS and the corresponding RKHS remain the same. Now, an explicit nonlinear exploitation of the data in a finite EKFS is achievable, which results in optimal feature ranking. Experimental results based on a hyperspectral image show that the proposed method can provide improved performance over the current state-of-the-art techniques.
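The "empirical kernel feature space" trick is worth illustrating on its own: replace the implicit RKHS embedding by the explicit, finite map x -> [k(x, x_1), ..., k(x, x_n)] built on the training set, so the nonlinear features can be manipulated explicitly. In the toy sketch below, scikit-learn's OneClassSVM stands in for the SVDD and the data are synthetic; this conveys the flavor of the construction, not the paper's MIP/QCLP machinery:

```python
# Explicit empirical kernel map + a linear one-class model playing the role of SVDD.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
X_train = rng.standard_normal((200, 10))          # "background" spectra
X_test = np.vstack([rng.standard_normal((20, 10)),
                    rng.standard_normal((5, 10)) + 4.0])   # 5 anomalies appended

gamma = 0.1
Phi_train = rbf_kernel(X_train, X_train, gamma=gamma)   # explicit EKFS features
Phi_test = rbf_kernel(X_test, X_train, gamma=gamma)

det = OneClassSVM(kernel="linear", nu=0.1).fit(Phi_train)
print(det.decision_function(Phi_test).round(2))   # low values flag the anomalies
```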

MultiView Diffusion Maps
Ofir Lindenbaum, Arie Yeredor, Moshe Salhov, Amir Averbuch

In this study we consider learning a reduced dimensionality representation from datasets obtained under multiple views. Such multiple views of datasets can be obtained, for example, when the same underlying process is observed using several different modalities, or measured with different instrumentation. Our goal is to effectively exploit the availability of such multiple views for various purposes, such as non-linear embedding, manifold learning, spectral clustering, anomaly detection and non-linear system identification. Our proposed method exploits the intrinsic relation within each view, as well as the mutual relations between views. We do this by defining a cross-view model, in which an implied Random Walk process between objects is restrained to hop between the different views. Our method is robust to scaling of each dataset, and is insensitive to small structural changes in the data. Within this framework, we define new diffusion distances and analyze the spectra of the implied kernels. We demonstrate the applicability of the proposed approach on both artificial and real data sets.
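The "random walk restrained to hop between views" can be made concrete with a small block kernel. The construction below loosely follows that spirit (two within-view affinities whose products form the off-diagonal blocks); the exact normalization and the analysis in the paper are more careful:

```python
# A loose numerical sketch of a cross-view diffusion operator on two views of
# the same n objects. Illustrative only, not the authors' exact construction.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(3)
n = 300
t = np.sort(rng.random(n)) * 4 * np.pi            # shared latent parameter
view1 = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((n, 2))
view2 = np.c_[t, t ** 2] + 0.05 * rng.standard_normal((n, 2))

K1 = rbf_kernel(view1, gamma=5.0)
K2 = rbf_kernel(view2, gamma=0.05)
W = np.block([[np.zeros((n, n)), K1 @ K2],
              [K2 @ K1, np.zeros((n, n))]])       # every hop must change view

d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))                   # symmetric conjugate of the walk matrix
evals, evecs = np.linalg.eigh(S)
order = np.argsort(evals)[::-1]
# Diffusion coordinates: right eigenvectors of the walk, scaled by their eigenvalues.
psi = (evecs / np.sqrt(d)[:, None])[:, order[1:4]] * evals[order[1:4]]
print(psi.shape)                                  # (2n, 3): one embedding per view copy
```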

A Framework of Sparse Online Learning and Its Applications by Dayong Wang, Pengcheng Wu, Peilin Zhao, Steven C.H. Hoi

The amount of data in our society has been exploding in the era of big data today. In this paper, we address several open challenges of big data stream classification, including high volume, high velocity, high dimensionality, high sparsity, and high class-imbalance. Many existing studies in data mining literature solve data stream classification tasks in a batch learning setting, which suffers from poor efficiency and scalability when dealing with big data. To overcome the limitations, this paper investigates an online learning framework for big data stream classification tasks. Unlike some existing online data stream classification techniques that are often based on first-order online learning, we propose a framework of Sparse Online Classification (SOC) for data stream classification, which includes some state-of-the-art first-order sparse online learning algorithms as special cases and allows us to derive a new effective second-order online learning algorithm for data stream classification. In addition, we also propose a new cost-sensitive sparse online learning algorithm by extending the framework with application to tackle online anomaly detection tasks where class distribution of data could be very imbalanced. We also analyze the theoretical bounds of the proposed method, and finally conduct an extensive set of experiments, in which encouraging results validate the efficacy of the proposed algorithms in comparison to a family of state-of-the-art techniques on a variety of data stream classification tasks.
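The first-order template that the framework generalizes is easy to write down: an online (logistic) gradient step followed by an l1 soft-threshold, so the weight vector stays sparse as the stream goes by. A minimal sketch on synthetic data (the second-order and cost-sensitive variants in the paper refine this loop):

```python
# Minimal first-order sparse online learner: SGD on the logistic loss with an
# l1 soft-threshold after every step (truncated gradient). Illustrative only.
import numpy as np

def sparse_online_logreg(stream, dim, eta=0.1, lam=0.01):
    w = np.zeros(dim)
    for x, y in stream:                            # y in {-1, +1}
        margin = y * (w @ x)
        grad = -y * x / (1.0 + np.exp(margin))     # logistic loss gradient
        w -= eta * grad
        w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(4)
true_w = np.zeros(100); true_w[:5] = 1.0           # only 5 informative features
X = rng.standard_normal((5000, 100))
y = np.sign(X @ true_w + 0.1 * rng.standard_normal(5000))
w = sparse_online_logreg(zip(X, y), dim=100)
print("nonzeros:", np.count_nonzero(w), "accuracy:", np.mean(np.sign(X @ w) == y))
```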


This paper presents a new approach, based on polynomial optimization and the method of moments, to the problem of anomaly detection. The proposed technique only requires information about the statistical moments of the normal-state distribution of the features of interest and compares favorably with existing approaches (such as Parzen windows and 1-class SVM). In addition, it provides a succinct description of the normal state. Thus, it leads to a substantial simplification of the anomaly detection problem when working with higher dimensional datasets.
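The paper uses the full machinery of polynomial optimization and the method of moments; as a much cruder illustration of "detection from moments of the normal state only", here is a detector built from just the first two moments (a Mahalanobis score), which is roughly the degree-2 special case of the idea:

```python
# Anomaly score from the first two moments of the normal-state data only.
import numpy as np

rng = np.random.default_rng(5)
normal = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 2]], size=2000)
mu, cov = normal.mean(axis=0), np.cov(normal, rowvar=False)
cov_inv = np.linalg.inv(cov)

def score(x):
    d = x - mu
    return float(d @ cov_inv @ d)                  # large value = anomalous

print(score(np.array([0.1, -0.2])))                # typical point: small score
print(score(np.array([5.0, -5.0])))                # off-model point: large score
```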


Anomaly Detection in Unstructured Environments using Bayesian Nonparametric Scene Modeling
Yogesh Girdhar, Walter Cho, Matthew Campbell, Jesus Pineda, Elizabeth Clarke, Hanumant Singh

This paper explores the use of a Bayesian non-parametric topic modeling technique for the purpose of anomaly detection in video data. We present results from two experiments. The first experiment shows that the proposed technique is automatically able to characterize the underlying terrain, and detect anomalous flora in image data collected by an underwater robot. The second experiment shows that the same technique can be used on images from a static camera in a dynamic unstructured environment. The second dataset consists of video data from a static seafloor camera capturing images of a busy coral reef. The proposed technique was able to detect all three instances of an underwater vehicle passing in front of the camera, amongst many other observations of fishes, debris, lighting changes due to surface waves, and benthic flora.
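As a very rough sketch of the "surprise under a topic model" idea: fit a topic model on bag-of-visual-words histograms from normal frames and flag frames the model finds hard to explain. scikit-learn's parametric LDA and the synthetic histograms below stand in for the Bayesian nonparametric model and the real video features used in the paper:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(6)
# Synthetic bag-of-visual-words histograms: normal frames use words 0-49,
# anomalous frames use words 150-199.
p_normal = np.r_[np.ones(50), np.zeros(150)] / 50
p_odd = np.r_[np.zeros(150), np.ones(50)] / 50
normal_frames = rng.multinomial(300, p_normal, size=500)
odd_frames = rng.multinomial(300, p_odd, size=5)

lda = LatentDirichletAllocation(n_components=8, random_state=0).fit(normal_frames)
for name, batch in [("normal frames", normal_frames[:5]), ("anomalous frames", odd_frames)]:
    print(name, lda.perplexity(batch))             # higher perplexity = more surprising
```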

Sparsity in Multivariate Extremes with Applications to Anomaly Detection
Nicolas Goix (LTCI), Anne Sabourin (LTCI), Stéphan Clémençon (LTCI)
(Submitted on 21 Jul 2015)
Capturing the dependence structure of multivariate extreme events is a major concern in many fields involving the management of risks stemming from multiple sources, e.g. portfolio monitoring, insurance, environmental risk management and anomaly detection. One convenient (non-parametric) characterization of extremal dependence in the framework of multivariate Extreme Value Theory (EVT) is the angular measure, which provides direct information about the probable 'directions' of extremes, that is, the relative contribution of each feature/coordinate of the 'largest' observations. Modeling the angular measure in high dimensional problems is a major challenge for the multivariate analysis of rare events. The present paper proposes a novel methodology aiming at exhibiting a sparsity pattern within the dependence structure of extremes. This is done by estimating the amount of mass spread by the angular measure on representative sets of directions, corresponding to specific sub-cones of $\mathbb{R}^d_+$. This dimension reduction technique paves the way towards scaling up existing multivariate EVT methods. Beyond a non-asymptotic study providing a theoretical validity framework for our method, we propose as a direct application a --first-- anomaly detection algorithm based on multivariate EVT. This algorithm builds a sparse 'normal profile' of extreme behaviours, to be confronted with new (possibly abnormal) extreme observations. Illustrative experimental results provide strong empirical evidence of the relevance of our approach.
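The dimension-reduction step can be sketched in a few lines: standardize the margins via ranks, keep the k most extreme points, and record on which small subsets of coordinates their mass concentrates. The toy code below is only loosely in the spirit of the paper's estimator (thresholds and normalizations are illustrative):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)
n, d = 5000, 6
# Toy data: extremes occur mostly along coordinates {0,1} or {3}
X = rng.pareto(3, size=(n, d))
X[:2500, :2] += rng.pareto(1, size=(2500, 1))
X[2500:, 3] += rng.pareto(1, size=2500)

# Rank-transform each margin to a standard Pareto scale
ranks = np.argsort(np.argsort(X, axis=0), axis=0)
V = 1.0 / (1.0 - (ranks + 0.5) / n)

k = 200                                             # number of extremes kept
radius = np.sort(V.max(axis=1))[-k]
extremes = V[V.max(axis=1) >= radius]
masses = Counter()
for v in extremes:
    support = tuple(np.where(v >= 0.1 * v.max())[0])   # coordinates that are "large"
    masses[support] += 1.0 / len(extremes)
for subset, mass in masses.most_common(5):
    print(subset, round(mass, 3))                   # most of the mass sits on few subsets
```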

Universal Anomaly Detection: Algorithms and Applications
Shachar Siboni, Asaf Cohen

Modern computer threats are far more complicated than those seen in the past. They are constantly evolving, altering their appearance, perpetually changing disguise. Under such circumstances, detecting known threats, a fortiori zero-day attacks, requires new tools, which are able to capture the essence of their behavior, rather than some fixed signatures. In this work, we propose novel universal anomaly detection algorithms, which are able to learn the normal behavior of systems and alert for abnormalities, without any prior knowledge on the system model, nor any knowledge on the characteristics of the attack. The suggested method utilizes the Lempel-Ziv universal compression algorithm in order to optimally give probability assignments for normal behavior (during learning), then estimate the likelihood of new data (during operation) and classify it accordingly. The suggested technique is generic, and can be applied to different scenarios. Indeed, we apply it to key problems in computer security. The first is detecting Botnets Command and Control (C&C) channels. A Botnet is a logical network of compromised machines which are remotely controlled by an attacker using a C&C infrastructure, in order to perform malicious activities. We derive a detection algorithm based on timing data, which can be collected without deep inspection, from open as well as encrypted flows. We evaluate the algorithm on real-world network traces, showing how a universal, low complexity C&C identification system can be built, with high detection rates and low false-alarm probabilities. Further applications include malicious tools detection via system calls monitoring and data leakage identification.
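The compression idea is easy to toy with: parse "normal" data with LZ78, keep the phrase dictionary, and score new sequences by how many phrases their parse needs when it can reuse that dictionary; poorly compressible means anomalous. The sketch below only conveys the flavor; the paper's probability assignments are more refined:

```python
# Toy LZ78-based anomaly score: sequences that compress poorly against a
# dictionary learned on normal data are flagged. Illustrative only.
import random

def lz78_parse(seq, dictionary=None):
    """Return (number of phrases used, dictionary) for an LZ78-style parse."""
    dic = dict(dictionary) if dictionary else {}
    phrase, phrases = "", 0
    for sym in seq:
        candidate = phrase + sym
        if candidate in dic:
            phrase = candidate
        else:
            dic[candidate] = len(dic) + 1
            phrases += 1
            phrase = ""
    return phrases + (1 if phrase else 0), dic

random.seed(0)
normal = "".join(random.choice("ab") for _ in range(20000))      # 2-symbol source
_, trained = lz78_parse(normal)

probe_normal = "".join(random.choice("ab") for _ in range(1000))
probe_anom = "".join(random.choice("abcd") for _ in range(1000))  # richer source

for name, probe in [("normal-like", probe_normal), ("anomalous", probe_anom)]:
    phrases, _ = lz78_parse(probe, trained)
    print(name, "phrases per symbol:", phrases / len(probe))
```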

Anomaly Detection for malware identification using Hardware Performance Counters by Alberto Garcia-Serrano

Computers are widely used today by most people. Internet-based applications like e-commerce or e-banking attract criminals who, using sophisticated techniques, try to introduce malware on the victim's computer. But not only computer users are at risk; so are smartphone and smartwatch users, smart cities, Internet of Things devices, etc. Different techniques have been tested against malware. Currently, pattern matching is the default approach in antivirus software. Machine learning is also successfully being used. Continuing this trend, in this article we propose an anomaly-based method using the hardware performance counters (HPC) available in almost any modern computer architecture. Because anomaly detection is an unsupervised process, new malware and APTs can be detected even if they are unknown.
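A hedged sketch of what such a pipeline looks like in practice: one feature vector of counter readings per sampling interval, an unsupervised model fit on clean executions, and an anomaly score for new runs. The counter values below are synthetic and the choice of IsolationForest is mine, not the paper's:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(8)
# Synthetic per-interval counter vectors: [instructions, cache misses, branch misses]
clean = rng.normal(loc=[1e9, 2e6, 5e5], scale=[5e7, 1e5, 2e4], size=(500, 3))
suspect = rng.normal(loc=[1.4e9, 8e6, 5e5], scale=[5e7, 1e5, 2e4], size=(10, 3))

model = IsolationForest(random_state=0).fit(clean)
print(model.decision_function(clean[:5]))          # near zero or positive: normal
print(model.decision_function(suspect))            # negative: flagged as anomalous
```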

 

Thesis: Fusion of Sparse Reconstruction Algorithms in Compressed Sensing, Sooraj K. Ambat - implementation -


Sooraj K. Ambat just let me know of the following:

Dear Igor,

I am happy to let you know that I have successfully defended my Ph.D. Thesis entitled "Fusion of Sparse Reconstruction Algorithms in Compressed Sensing". Thank you very much for maintaining such a wonderful blog which updated me, on a daily basis, about the cutting edge research in Compressed Sensing and related areas during my Ph.D. days.

Inspired by your collection of codes for reproducible research, I have also released my codes publicly available. The details of the links of the codes are available in my Thesis (and below)
...
My Thesis can be downloaded from 
https://sites.google.com/site/soorajkambat/home/research/ph-d/PhDThesis_SoorajKAmbat%28April2015%29.pdf?attredirects=0&d=1

You may also find my publications and related codes for reproducible research downloadable at https://sites.google.com/site/soorajkambat/home/research/ph-d


with warm regards,
*************************************************
Thanks Sooraj, this is impressive and congratulations ! On top of the exploration of these ensemble approaches, I loved reference 78 and the mention in the acknowledgments :-) Without further ado:


Fusion of Sparse Reconstruction Algorithms in Compressed Sensing, by Sooraj K. Ambat
Compressed Sensing (CS) is a new paradigm in signal processing which exploits the sparse or compressible nature of the signal to significantly reduce the number of measurements, without compromising on the signal reconstruction quality. Recently, many algorithms have been reported in the literature for efficient sparse signal reconstruction. Nevertheless, it is well known that the performance of any sparse reconstruction algorithm depends on many parameters like number of measurements, dimension of the sparse signal, the level of sparsity, the measurement noise power, and the underlying statistical distribution of the non-zero elements of the signal. It has been observed that a satisfactory performance of a sparse reconstruction algorithm mandates certain requirements on these parameters, which are different for different algorithms. Many applications are unlikely to fulfil these requirements. For example, imaging speed is crucial in many Magnetic Resonance Imaging (MRI) applications. This restricts the number of measurements, which in turn affects the medical diagnosis using MRI. Hence, any strategy to improve the signal reconstruction in such an adverse scenario is of substantial interest in CS. Interestingly, it can be observed that the performance degradation of the sparse recovery algorithms in the aforementioned cases does not always imply a complete failure. That is, even in such adverse situations, a sparse reconstruction algorithm may provide partially correct information about the signal. In this thesis, we study this scenario and propose a novel fusion framework and an iterative framework which exploit the partial information available in the sparse signal estimate(s) to improve sparse signal reconstruction. The proposed fusion framework employs multiple sparse reconstruction algorithms, independently, for signal reconstruction. We first propose a fusion algorithm viz. Fusion of Algorithms for Compressed Sensing (FACS) which fuses the estimates of multiple participating algorithms in order to improve the sparse signal reconstruction. To alleviate the inherent drawbacks of FACS and further improve the sparse signal reconstruction, we propose another fusion algorithm called Committee Machine Approach for Compressed Sensing (CoMACS) and variants of CoMACS. For low-latency applications, we propose a latency-friendly fusion algorithm called progressive Fusion of Algorithms for Compressed Sensing (pFACS). We also extend the fusion framework to the Multiple Measurement Vector (MMV) problem and propose the extension of FACS called Multiple Measurement Vector Fusion of Algorithms for Compressed Sensing (MMV-FACS). We theoretically analyse the proposed fusion algorithms and derive guarantees for performance improvement. We also show that the proposed fusion algorithms are robust against both signal and measurement perturbations. Further, we demonstrate the efficacy of the proposed algorithms via numerical experiments: (i) using sparse signals with different statistical distributions in noise-free and noisy scenarios, and (ii) using real-world ECG signals. The extensive numerical experiments show that, for a judicious choice of the participating algorithms, the proposed fusion algorithms result in a sparse signal estimate which is often better than the sparse signal estimate of the best participating algorithm. The proposed fusion framework requires employing multiple sparse reconstruction algorithms for sparse signal reconstruction. 
We also propose an iterative framework and algorithm called Iterative Framework for Sparse Reconstruction Algorithms (IFSRA) to improve the performance of a given arbitrary sparse reconstruction algorithm. We theoretically analyse IFSRA and derive convergence guarantees under signal and measurement perturbations. Numerical experiments on synthetic and real-world data confirm the efficacy of IFSRA. The proposed fusion algorithms and IFSRA are general in nature and do not require any modification in the participating algorithm(s).
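As I understand the fusion idea, the basic recipe is: run several sparse solvers independently, pool their estimated supports, re-fit by least squares on the pooled support and keep the K largest coefficients. Here is a rough paraphrase of that recipe on synthetic data (use Sooraj's released code for the real thing):

```python
# A rough paraphrase of the fusion recipe: pool supports from two independent
# solvers, least squares on the union, keep the K largest, re-fit. Not FACS itself.
import numpy as np
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

rng = np.random.default_rng(9)
m, n, K = 40, 100, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K) + np.sign(rng.standard_normal(K))
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Two participating algorithms, run independently.
x_omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(A, y).coef_
x_l1 = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(A, y).coef_

# Fuse: union of supports, least squares on the union, keep the K largest, re-fit.
support = np.union1d(np.nonzero(x_omp)[0], np.nonzero(x_l1)[0])
z, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
keep = support[np.argsort(np.abs(z))[-K:]]
x_fused = np.zeros(n)
x_fused[keep], *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)

for name, est in [("OMP", x_omp), ("l1", x_l1), ("fused", x_fused)]:
    print(name, "relative error:", np.linalg.norm(est - x_true) / np.linalg.norm(x_true))
```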



Credits: NASA/JPL/University of Arizona
 These dark, narrow, 100 meter-long streaks called recurring slope lineae flowing downhill on Mars are inferred to have been formed by contemporary flowing water. Recently, planetary scientists detected hydrated salts on these slopes at Hale crater, corroborating their original hypothesis that the streaks are indeed formed by liquid water. The blue color seen upslope of the dark streaks is thought not to be related to their formation, but instead is from the presence of the mineral pyroxene. The image is produced by draping an orthorectified (Infrared-Red-Blue/Green(IRB)) false color image (ESP_030570_1440) on a Digital Terrain Model (DTM) of the same site produced by High Resolution Imaging Science Experiment (University of Arizona). Vertical exaggeration is 1.5.
 

Monday, September 28, 2015

Data: MindBigData, the "MNIST" of Brain Digits

Wow, I love it when I see very different types of datasets. The following one seems very interesting and is from David Vivancos. It's an open dataset of "Brain Digits", available at: http://www.mindbigdata.com/opendb . This reminds me of a discussion we had a while back about compressive EEG systems. From the page (the page has actual links to the datasets):
 
MindBigData
The "MNIST" of Brain Digits
The version 1.03 of the open database contains 1,183,368 brain signals of 2 seconds each, captured with the stimulus of seeing  a digit (from 0 to 9) and thinking about it, over the course of almost 2 years between 2014 & 2015, from a single Test Subject David Vivancos.
All the signals have been captured using commercial EEGs (not medical grade), NeuroSky MindWave, Emotiv EPOC, Interaxon Muse & Emotiv Insight, covering a total of 19 Brain (10/20) locations.
......

We built our own tools to capture them, but there is no post-processing on our side, so they come raw as they are read from each EEG device, in total 389,519,941 Data Points.
Feel free to test any machine learning, deep learning or whatever algorithm you think could fit; we only ask for acknowledging the source and please let us know of your performance!

........
BRAIN LOCATIONS:
Each EEG device captures the signals via different sensors, located in these areas of my brain; the color represents the device:    MindWave, EPOC, Muse, Insight
David Vivancos Brain 10/20 Locations
Contact us if you need any more info.
Let's decode My Brain!
September 2015
David Vivancos
vivancos@vivancos.com

This MindBigData The "MNIST" of Brain Digits is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
 
 
 
 

Tensorizing Neural Networks - implementation -

From Robert Dionne's contribution to GitXiv, here is a recent NIPS-accepted paper using the QTT format to "compress" neural network architectures:
 
 
 
Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.
 The implementation is here.
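To see why the Tensor Train format saves so many parameters, here is a tiny two-core TT-matrix sketch (sizes and rank are illustrative, not the paper's):

```python
# Toy TT-matrix: a 256x256 dense weight matrix factored as a (16*16)x(16*16)
# tensor with two cores and TT-rank 4. Illustrative sizes only.
import numpy as np

m1, m2, n1, n2, r = 16, 16, 16, 16, 4
G1 = np.random.randn(m1, n1, r)                   # first TT core
G2 = np.random.randn(r, m2, n2)                   # second TT core

# Reconstruct the full matrix only to check against it; in practice you never
# materialize it.
W = np.einsum('abr,rcd->acbd', G1, G2).reshape(m1 * m2, n1 * n2)
print("dense parameters:", W.size, "TT parameters:", G1.size + G2.size)

def tt_matvec(G1, G2, x):
    # y[(i1,i2)] = sum_{j1,j2,r} G1[i1,j1,r] G2[r,i2,j2] x[(j1,j2)]
    X = x.reshape(n1, n2)
    T = np.einsum('abr,bd->ard', G1, X)           # contract j1
    Y = np.einsum('ard,rcd->ac', T, G2)           # contract r and j2
    return Y.reshape(m1 * m2)

x = np.random.randn(n1 * n2)
assert np.allclose(W @ x, tt_matvec(G1, G2, x))   # same layer output, far fewer parameters
```

With more cores and larger mode sizes the gap grows accordingly, which is where the huge compression factors reported for the VGG fully-connected layers come from.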
 
 

Saturday, September 26, 2015

Saturday Morning Videos: IPAM workshop on Computational Photography and Intelligent Cameras

 
 
Here are the videos of the IPAM workshop on Computational Photography and Intelligent Cameras (I haven't had much luck with firefox, but chrome seems to be ok with the low resolution videos):





Friday, September 25, 2015

Metric Learning: Random Projections and Deep Learning Style


Towards Making High Dimensional Distance Metric Learning Practical by Qi Qian, Rong Jin, Lijun Zhang, Shenghuo Zhu
In this work, we study distance metric learning (DML) for high dimensional data. A typical approach for DML with high dimensional data is to perform the dimensionality reduction first before learning the distance metric. The main shortcoming of this approach is that it may result in a suboptimal solution due to the subspace removed by the dimensionality reduction method. In this work, we present a dual random projection framework for DML with high dimensional data that explicitly addresses the limitation of dimensionality reduction for DML. The key idea is to first project all the data points into a low dimensional space by random projection, and compute the dual variables using the projected vectors. It then reconstructs the distance metric in the original space using the estimated dual variables. The proposed method, on one hand, enjoys the light computation of random projection, and on the other hand, alleviates the limitation of most dimensionality reduction methods. We verify both empirically and theoretically the effectiveness of the proposed algorithm for high dimensional DML.
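Not the paper's algorithm, but the basic link it builds on is worth spelling out: a random projection R induces the Mahalanobis metric M = R^T R in the original space, so distances between projected points are exactly Mahalanobis distances under that M. The paper's point is to do better than M = R^T R by computing dual variables with the projected vectors and then reconstructing the metric back in the original space:

```python
# The identity ||R(x - y)||^2 = (x - y)^T (R^T R) (x - y), i.e. a random
# projection is itself a (degenerate) metric in the original space.
import numpy as np

rng = np.random.default_rng(10)
D, d = 500, 20
R = rng.standard_normal((d, D)) / np.sqrt(d)       # random projection
M = R.T @ R                                        # induced metric in the original space

x, y = rng.standard_normal(D), rng.standard_normal(D)
lhs = np.sum((R @ (x - y)) ** 2)                   # distance between projected points
rhs = (x - y) @ M @ (x - y)                        # Mahalanobis distance under M
print(np.isclose(lhs, rhs), lhs)
```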

Geometry-aware Deep Transform by Jiaji Huang, Qiang Qiu, Robert Calderbank, Guillermo Sapiro
Many recent efforts have been devoted to designing sophisticated deep learning structures, obtaining revolutionary results on benchmark datasets. The success of these deep learning methods mostly relies on an enormous volume of labeled training samples to learn a huge number of parameters in a network; therefore, understanding the generalization ability of a learned deep network cannot be overlooked, especially when restricted to a small training set, which is the case for many applications. In this paper, we propose a novel deep learning objective formulation that unifies both the classification and metric learning criteria. We then introduce a geometry-aware deep transform to enable a non-linear discriminative and robust feature transform, which shows competitive performance on small training sets for both synthetic and real-world data. We further support the proposed framework with a formal $(K,\epsilon)$-robustness analysis.


Image Credit: NASA/JPL-Caltech
 This image was taken by Front Hazcam: Right B (FHAZ_RIGHT_B) onboard NASA's Mars rover Curiosity on Sol 1114 (2015-09-24 22:07:08 UTC).

Full Resolution
 

Thursday, September 24, 2015

Discovering governing equations from data: Sparse identification of nonlinear dynamical systems - implementation -

Interesting set of experiments (check the annex) !

The ability to discover physical laws and governing equations from data is one of humankind's greatest intellectual achievements. A quantitative understanding of dynamic constraints and balances in nature has facilitated rapid development of knowledge and enabled advanced technological achievements, including aircraft, combustion engines, satellites, and electrical power. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing physical equations from measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. The resulting models are parsimonious, balancing model complexity with descriptive ability while avoiding overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized, time-varying, or externally forced systems.
The code is in the paper.
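While waiting to dig into the paper's code, here is a compact sketch of the sparse-regression step at its heart: simulate the Lorenz system, build a library of candidate terms, and recover the governing equations with sequentially thresholded least squares (exact derivatives are used below for clarity; the paper works from data):

```python
# Sketch of sparse identification of nonlinear dynamics on the Lorenz system.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0, 20, 8000)
sol = solve_ivp(lorenz, (0, 20), [-8.0, 7.0, 27.0], t_eval=t, rtol=1e-9, atol=1e-9)
X = sol.y.T
dX = np.array([lorenz(0, s) for s in X])           # exact derivatives for clarity

# Candidate library: [1, x, y, z, x^2, xy, xz, y^2, yz, z^2]
x, y, z = X.T
Theta = np.column_stack([np.ones_like(x), x, y, z,
                         x * x, x * y, x * z, y * y, y * z, z * z])
names = ["1", "x", "y", "z", "xx", "xy", "xz", "yy", "yz", "zz"]

def stls(Theta, dX, lam=0.1, iters=10):
    """Sequentially thresholded least squares."""
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(iters):
        Xi[np.abs(Xi) < lam] = 0.0
        for k in range(dX.shape[1]):               # re-fit on surviving terms
            big = np.abs(Xi[:, k]) >= lam
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)
    return Xi

Xi = stls(Theta, dX)
for k, var in enumerate("xyz"):
    terms = [f"{Xi[j, k]:+.2f} {names[j]}" for j in range(len(names)) if Xi[j, k] != 0]
    print(f"d{var}/dt =", " ".join(terms))         # recovers the Lorenz equations
```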
 

Book: Information Entropy and their Geometric Structures

 
 
Frederic Barbaresco sent me the following link to this free book entitled "Information Entropy and their Geometric Structures". From the abstract:
The aim of this book is to provide an overview of current works addressing the topics of research that explore the geometric structures of information and entropy. The papers in this book include extended versions of a selection of the papers published in the Proceedings of the 34th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2014), Amboise, France, 21–26 September 2014. Chapter 1 of the book is a historical review of the origins of thermodynamics and information theory. Chapter 2 discusses the mathematical and physical foundations of geometric structures related to information and entropy. Lastly, Chapter 3 is dedicated to applications with numerical schemes for geometric structures of information and entropy.
 

Jobs: Center for Advanced Mathematics for Energy Research Applications (CAMERA)

 
 
Stefano Marchesini  just sent me the following:

Dear Igor,  
We had some good news from the DOE Office of Science to support and expand the Center for Advanced Mathematics for Energy Research Applications (CAMERA): http://newscenter.lbl.gov/2015/09/22/new-support-for-camera-to-develop-computational-mathematics-for-experimental-facilities-research/

I refer to the news release and our website camera.lbl.gov for a detailed description of all the projects. CAMERA’s mission is to develop fundamental mathematics and algorithms, delivered as data analysis software that can accelerate scientific discovery at the advanced scientific user facilities of the DOE Office of Science.

I was hoping you could point out to your vast and skilled readership that CAMERA has positions and visiting opportunities available across a collection of disciplines to meet data analysis challenges. Particularly valuable are cross-disciplinary skills blending combinations of such fields as:
  • Applied mathematics, including statistics and machine learning.
  • Computational science.
  • Signal and image processing.
  • Scientific research at experimental facilities and laboratories.
  • Software engineering.

Further information about CAMERA, current projects, future expansion, and engagement opportunities may be found at camera.lbl.gov. Contact information: camera@math.lbl.gov

I'd be happy to answer questions about some of these opportunities as well, though not in all areas (primarily phase retrieval, inverse problems, parallel computing, and x-ray science and technology); my contact information is below.

--

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit www.lbl.gov.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information about DOE’s Office of Science please visit science.energy.gov.

--


Best, Stefano

|^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-
Stefano Marchesini http://smarchesini.lbl.gov
CAMERA, Advanced Light Source, MS 2-400
Lawrence Berkeley National Laboratory
1 Cyclotron Rd. Berkeley, CA 94720-8199, USA
 

Wednesday, September 23, 2015

ASimCo: Analysis SimCO Algorithms for Sparse Analysis Model Based Dictionary Learning - implementation -

 
 
In this paper, we consider the dictionary learning problem for the sparse analysis model. A novel algorithm is proposed by adapting the simultaneous codeword optimization (SimCO) algorithm, based on the sparse synthesis model, to the sparse analysis model. This algorithm assumes that the analysis dictionary contains unit ℓ2-norm atoms and learns the dictionary by optimization on manifolds. This framework allows multiple dictionary atoms to be updated simultaneously in each iteration. However, similar to several existing analysis dictionary learning algorithms, dictionaries learned by the proposed algorithm may contain similar atoms, leading to a degenerate (coherent) dictionary. To address this problem, we also consider restricting the coherence of the learned dictionary and propose Incoherent Analysis SimCO by introducing an atom decorrelation step following the update of the dictionary. We demonstrate the competitive performance of the proposed algorithms using experiments with synthetic data and image denoising as compared with existing algorithms. 
An implementation is on Wenwu Wang's code page.
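Not the SimCO manifold optimization itself, but a toy alternating scheme that shows the shape of the analysis dictionary learning problem: estimate, per signal, which analysis coefficients should be zero, take a gradient step pushing them there, and renormalize the rows of Omega to unit l2 norm (the simultaneous multi-atom manifold updates and the incoherence step are what the paper adds on top):

```python
# Toy analysis dictionary learning: alternate cosupport estimation with a
# row-normalized gradient step. Illustrative only, not Analysis SimCO.
import numpy as np

rng = np.random.default_rng(11)
d, p, n, cosparsity = 8, 12, 2000, 6

Omega_true = rng.standard_normal((p, d))
Omega_true /= np.linalg.norm(Omega_true, axis=1, keepdims=True)

# Build signals that are exactly cosparse: each lies in the null space of
# `cosparsity` randomly chosen rows of Omega_true.
X = np.empty((d, n))
for i in range(n):
    rows = rng.choice(p, cosparsity, replace=False)
    _, _, Vt = np.linalg.svd(Omega_true[rows])
    basis = Vt[cosparsity:].T                      # null-space basis
    X[:, i] = basis @ rng.standard_normal(basis.shape[1])

Omega = rng.standard_normal((p, d))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)
step = 0.5
for it in range(200):
    A = Omega @ X
    target = A.copy()
    # Per signal, the `cosparsity` smallest analysis coefficients should be zero.
    idx = np.argsort(np.abs(A), axis=0)[:cosparsity]
    np.put_along_axis(target, idx, 0.0, axis=0)
    grad = (A - target) @ X.T / n                  # gradient of 0.5*||Omega X - target||^2 / n
    Omega -= step * grad
    Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)   # keep unit-norm rows

A = Omega @ X
smallest = np.sort(np.abs(A), axis=0)[:cosparsity]
print("mean magnitude of the", cosparsity, "smallest analysis coefficients per signal:",
      smallest.mean())
```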
 

Tuesday, September 22, 2015

LSS: Learning the Structure for Structured Sparsity - implementation -

 
 
Learning the Structure for Structured Sparsity by  Nino Shervashidze and Francis Bach
Structured sparsity has recently emerged in statistics, machine learning and signal processing as a promising paradigm for learning in high-dimensional settings. All existing methods for learning under the assumption of structured sparsity rely on prior knowledge on how to weight (or how to penalize) individual subsets of variables during the subset selection process, which is not available in general. Inferring group weights from data is a key open research problem in structured sparsity. In this paper, we propose a Bayesian approach to the problem of group weight learning. We model the group weights as hyperparameters of heavy-tailed priors on groups of variables and derive an approximate inference scheme to infer these hyperparameters. We empirically show that we are able to recover the model hyperparameters when the data are generated from the model, and we demonstrate the utility of learning weights in synthetic and real denoising problems.

The LSS package is here
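To make concrete what "weighting individual subsets of variables" means in the penalized formulation that the paper starts from, here is the proximal operator of a weighted group-lasso penalty; the paper's contribution is precisely to infer the weights w_g from data rather than fixing them by hand as done below:

```python
# Block soft-thresholding for the weighted group-lasso penalty sum_g w_g ||x_g||_2.
import numpy as np

def prox_weighted_group_lasso(x, groups, weights, step):
    """Shrink each group g toward zero by step * weights[g]."""
    out = np.zeros_like(x)
    for g, idx in enumerate(groups):
        norm = np.linalg.norm(x[idx])
        if norm > step * weights[g]:
            out[idx] = (1 - step * weights[g] / norm) * x[idx]
    return out

x = np.array([3.0, -2.0, 0.3, 0.1, 1.5, 1.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
weights = np.array([0.5, 4.0, 1.0])                # heavily penalized second group
print(prox_weighted_group_lasso(x, groups, weights, step=0.5))
# The second group is zeroed out entirely; the others are only shrunk toward zero.
```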
 