
Tuesday, May 14, 2019

Compressive Single-pixel Fourier Transform Imaging using Structured Illumination

** Nuit Blanche is now on Twitter: @NuitBlog **


Here is some new CS-related hardware:


Single Pixel (SP) imaging is now a reality in many applications, e.g., biomedical ultrathin endoscope and fluorescent spectroscopy. In this context, many schemes exist to improve the light throughput of these devices, e.g., using structured illumination driven by compressive sensing theory. In this work, we consider the combination of SP imaging with Fourier Transform Interferometry (SP-FTI) to reach high-resolution HyperSpectral (HS) imaging, as desirable, e.g., in fluorescent spectroscopy. While this association is not new, we here focus on optimizing the spatial illumination, structured as Hadamard patterns, during the optical path progression. We follow a variable density sampling strategy for space-time coding of the light illumination, and show theoretically and numerically that this scheme allows us to reduce the number of measurements and the light exposure of the observed object compared to conventional compressive SP-FTI.
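For the curious, here is what variable-density sampling of Hadamard patterns could look like numerically. This is a back-of-the-envelope sketch: the decaying density profile and all sizes are my own illustrative assumptions, not the paper's actual space-time coding scheme.

```python
# A back-of-the-envelope sketch of variable-density sampling of Hadamard
# illumination patterns; the decaying density profile and all sizes are
# illustrative assumptions, not the paper's actual space-time coding scheme.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n = 64                         # patterns are rows of an n x n Hadamard matrix
H = hadamard(n)                # entries in {-1, +1}; n must be a power of 2
m = 16                         # number of patterns actually displayed (m << n)

# Variable-density profile: favor low-index rows, mimicking the idea of
# spending more measurements where the signal energy concentrates.
idx = np.arange(n)
density = 1.0 / (1.0 + idx)
density /= density.sum()
rows = rng.choice(n, size=m, replace=False, p=density)

Phi = H[rows]                  # m x n sensing matrix of Hadamard patterns
x = rng.random(n)              # toy 1-D "scene"
y = Phi @ x                    # one single-pixel measurement per pattern
print(y.shape)                 # (16,)
```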
Related: Single Pixel Hyperspectral Imaging using Fourier Transform Interferometry


Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Monday, July 18, 2016

An approximate message passing approach for compressive hyperspectral imaging using a simultaneous low-rank and joint-sparsity prior

Ah! Using AMP in hyperspectral imaging with sparsity and low rank; this was much needed.
 


An approximate message passing approach for compressive hyperspectral imaging using a simultaneous low-rank and joint-sparsity prior by Yangqing Li, Saurabh Prasad, Wei Chen, Changchuan Yin, Zhu Han

This paper considers a compressive sensing (CS) approach for hyperspectral data acquisition, which results in a practical compression ratio substantially higher than the state-of-the-art. Applying a simultaneous low-rank and joint-sparse (L&S) model to the hyperspectral data, we propose a novel algorithm for joint reconstruction of hyperspectral data based on loopy belief propagation that enables the exploitation of both structured sparsity and amplitude correlations in the data. Experimental results with real hyperspectral datasets demonstrate that the proposed algorithm outperforms the state-of-the-art CS-based solutions with substantial reductions in reconstruction error.
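For intuition, here is a toy sketch of the two shrinkage operations an L&S prior suggests: row-wise soft thresholding for joint sparsity across bands, and singular value thresholding for low rank. The paper's actual algorithm is built on AMP/loopy belief propagation, not this naive alternation; sizes and thresholds below are arbitrary.

```python
# Toy proximal operators suggested by a simultaneous low-rank and
# joint-sparse (L&S) prior on a pixels x bands matrix; sizes and thresholds
# are arbitrary, and the paper's actual algorithm is AMP/loopy BP, not this.
import numpy as np

def row_soft_threshold(X, tau):
    # Joint sparsity: shrink whole rows (a row = one pixel across all bands).
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def svd_shrink(X, tau):
    # Low rank: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

X = np.random.default_rng(0).standard_normal((100, 20))  # pixels x bands
Z = svd_shrink(row_soft_threshold(X, 0.5), 1.0)
print(np.linalg.matrix_rank(Z), int((np.linalg.norm(Z, axis=1) == 0).sum()))
```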
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, May 13, 2016

Follow-up to 'Making Hyperspectral Imaging Mainstream'

Following up on this morning's hyperspectral approach, here is some feedback on my proposal about Making Hyperspectral Imaging Mainstream.
 

Do you remember it? No? Well, go read it, I'll wait... Ximea, the maker of hyperspectral cameras, is still pondering the issue, but I got a few very good interactions out of that idea. Here they are:

From the blog comment section:
Harrison Knoll said...
This is a great idea! We here at Aerial Agriculture have been collecting hyperspectral data and will be following your progress. Let us know if there is anything you need! ~Harrison
 
Someone from Movidius said...
Great idea - Movidius would be very supportive
Alex St. John said...
Agreed, hyperspectral space is the place for next-level analysis!

Also from Aerial Agriculture here, and am interested to follow up on this kind of project and continue building.

Alex

Indir Jaganjac 

Igor, see these hyperspectral images of natural scenes at the Manchester University site: http://personalpages.manchester.ac.uk/staff/david.foster/Hyperspectral_images_of_natural_scenes_04.html. Scenes were illuminated by direct sunlight in a clear or almost clear sky. Estimated reflectance spectra (effective spectral reflectances) at each pixel in each scene's images can be downloaded (1017x1338x33 Matlab array). The hyperspectral imaging system used to acquire scene reflectances was based on a low-noise Peltier-cooled digital camera providing a spatial resolution of 1344x1024 pixels (Hamamatsu, model C4742-95-12ER) with a fast tunable liquid-crystal filter.

Kyle Forbes 
Experienced Software and Data Leader
That's why I started www.agrolytic.com, leveraging machine learning with hyperspectral and other spatial data to address information challenges in agriculture.

Very interesting!
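As an aside, here is a minimal sketch of loading one of the Manchester scenes Indir points to, once downloaded. The local filename and the variable name inside the .mat file ('reflectances') are assumptions based on the dataset page and may differ from scene to scene.

```python
# A minimal sketch of loading one of the Manchester scenes after download;
# the filename and the 'reflectances' variable name are assumptions taken
# from the dataset page and may differ from scene to scene.
import scipy.io

data = scipy.io.loadmat('scene4.mat')       # hypothetical local filename
cube = data['reflectances']                 # expected shape ~ (1017, 1338, 33)
print(cube.shape, cube.dtype)

# Each pixel holds a 33-sample reflectance spectrum across the visible range.
pixel_spectrum = cube[500, 600, :]
```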

 
All hyperspectral related blog entries are under the hyperspectral tag.

In other news, here is: Image-level Classification in Hyperspectral Images using Feature Descriptors, with Application to Face Recognition  by Vivek Sharma, Luc Van Gool

In this paper, we proposed a novel pipeline for image-level classification in hyperspectral images. By doing this, we show that the discriminative spectral information at image-level features leads to significantly improved performance in a face recognition task. We also explored the potential of traditional feature descriptors in hyperspectral images. From our evaluations, we observe that SIFT features outperform the state-of-the-art hyperspectral face recognition methods, as well as the other descriptors. With the increasing deployment of hyperspectral sensors in a multitude of applications, we believe that our approach can effectively exploit the spectral information in hyperspectral images and is thus beneficial to more accurate classification.
 
 

Hyperspectral Blind Reconstruction From Random Spectral Projections

Devising the right hardware for hyperspectral imaging, thanks to a clear path from physical reality to image reconstruction, is what the authors of the following paper enable:

This paper proposes a blind hyperspectral reconstruction technique termed spectral compressive acquisition (SpeCA), conceived for spaceborne sensor systems, which are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. SpeCA exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace and it is blind in the sense that the subspace is learned from the measured data. The SpeCA encoder is computationally very light; it just computes random projections (RPs) of the acquired spectral vectors. The SpeCA decoder solves a form of blind reconstruction from RPs whose complexity, although higher than that of the encoder, is light in the sense that it requires only modest resources to be implemented in real time. The SpeCA coding/decoding scheme achieves perfect reconstruction in noise-free hyperspectral images (HSIs) and is very competitive on noisy data. The effectiveness of the proposed methodology is illustrated in both synthetic and real scenarios.
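To make the encoder/decoder asymmetry concrete, here is a small sketch in the spirit of SpeCA, under a strong simplifying assumption: the decoder is handed the low-dimensional subspace, whereas learning that subspace blindly from the measurements is precisely the paper's contribution. Each pixel gets its own random projections, and reconstruction reduces to a tiny least squares problem per pixel.

```python
# SpeCA-flavored sketch under a strong simplification: the decoder already
# knows the subspace basis E (blindly learning it from the RPs is the paper's
# actual contribution). Encoder = a few random projections per pixel.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_pix, k = 200, 1000, 8
E = np.linalg.qr(rng.standard_normal((n_bands, k)))[0]   # subspace basis
X = E @ rng.random((k, n_pix))                           # spectra in span(E)

m = 12                                                   # m >= k RPs per pixel
Phis = rng.standard_normal((n_pix, m, n_bands))          # per-pixel projections
Y = np.stack([Phis[i] @ X[:, i] for i in range(n_pix)], axis=1)

# Decoder: with the subspace in hand, each pixel is a tiny least squares fit.
X_hat = np.empty_like(X)
for i in range(n_pix):
    A = Phis[i] @ E                                      # m x k
    z = np.linalg.lstsq(A, Y[:, i], rcond=None)[0]
    X_hat[:, i] = E @ z
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))     # ~0 in the noise-free case
```

The encoder side really is featherweight here: a handful of inner products per spectral vector, which is the point for resource-starved spaceborne systems.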
Earlier:
Robust Collaborative Nonnegative Matrix Factorization For Hyperspectral Unmixing (R-CoNMF)
Jun Li, Jose M. Bioucas-Dias, Antonio Plaza, Lin Liu

The recently introduced collaborative nonnegative matrix factorization (CoNMF) algorithm was conceived to simultaneously estimate the number of endmembers, the mixing matrix, and the fractional abundances from hyperspectral linear mixtures. This paper introduces R-CoNMF, which is a robust version of CoNMF. The robustness has been added by a) including a volume regularizer which penalizes the distance to a mixing matrix inferred by a pure pixel algorithm; and by b) introducing a new proximal alternating optimization (PAO) algorithm for which convergence to a critical point is guaranteed. Our experimental results indicate that R-CoNMF provides effective estimates both when the number of endmembers is unknown and when it is known.



 

Wednesday, March 30, 2016

Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

Ah! Here comes the tsunami. Multispectral was fine, and back in 2007 we already noted that hyperspectral imaging (Hyperion on EO-1) overloaded distribution channels such as TDRSS (thereby elevating the issue of Making Hyperspectral Imaging Mainstream). Just imagine what could be done if, instead of 10 or 200 spectral bands, you could get 1000 spectral bands on a CubeSat. We may not be far from this reality according to today's entry, woohoo!

Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder by Isaac August, Yaniv Oiknine, Marwan AbuLeil, Ibrahim Abdulhalim, and Adrian Stern

Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

And yes, there is also the issue of Making Hyperspectral Imaging Mainstream.


Friday, December 18, 2015

Hamming's Time: Making Hyperspectral Imaging Mainstream

Friday afternoon is Hamming's time. Today I decided to compete in the Best Camera Application contest of XIMEA, a maker of small hyperspectral cameras. Here is my entry:


Challenging task: Make hyperspectral imaging mainstream

Idea: Create a large database of hyperspectral imagery for use in Machine/Deep Learning Competitions



Background

Machine Learning is the field concerned with creating, training and using algorithms dedicated to making sense of data. These algorithms take advantage of training data (images, videos) to improve at tasks such as detection, classification, etc. In recent years, we have witnessed a spectacular growth in this field thanks to the joint availability of large datasets originating from the internet and the attendant curating/labeling efforts of said images and videos.

Numerous labeled datasets such as CIFAR [1], ImageNet [2], etc. routinely permit algorithms of increased complexity to be developed and to compete in state-of-the-art classification contests. For instance, the rise of deep learning algorithms came from breaking all the state-of-the-art classification results in the ILSVRC-2012 competition, achieving "a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry" [3]. More recent results of this heated competition were shown at the NIPS conference last week, where teams at Microsoft Research produced breakthroughs in classification with an astounding 152-layer neural network [4]. This intense competition between highly capable teams at universities and large internet companies is only possible because large amounts of training data are being made available.

Image or even video processing for hyperspectral imagery cannot simply follow the development that image processing went through over the past 40 years. The underlying reason stems from the fact that this development was performed at considerable expense by companies and governments alike and eventually yielded standards such as JPEG, GIF, JPEG 2000, MPEG, etc. Because such funding is no longer available, we need to find new ways of improving and working with new imaging modalities.
Technically, since hyperspectral imagery is still a niche market, most analysis performed in this field runs the risk of being seen as an outgrowth of normal imagery: i.e., substandard tools such as JPEG or labor-intensive computer vision tools are used to classify and use this imagery without much thought given to the additional structure of the spectral information. More sophisticated tools such as advanced matrix factorizations (NMF, PCA, Sparse PCA, dictionary learning, ...) in turn focus on the spectral information but seldom use the spatial information. Both approaches suffer from not investigating more fully the inherent robust structure of this imagery.

For hyperspectral imagery to become mainstream, algorithms for compression and for its day-to-day use have to take advantage of the currently very active and highly competitive development of Machine Learning algorithms. In short, creating large and rich hyperspectral imagery datasets beyond what is currently available [5-8] is central for this technology to grow out of its niche markets and become central to our everyday lives.



The proposal

In order to make hyperspectral imagery mainstream, I propose to use a XIMEA camera to shoot imagery and video of different objects and locations, and to label these datasets.

The datasets will then be made available on the internet for use by parties interested in running classification competitions based on them (Kaggle, academic competitions, ...).

As a co-organizer, I also intend to enlist some of the folks in the Paris Machine Learning meetup group (with close to 3000 members, it is one of the largest Machine Learning meetups in the world [9]) to help enrich this dataset.

The dataset should be made available from servers probably colocated at a university or some non-profit organization (to be identified). A report presenting the dataset should eventually be academically citable.



References
[1] CIFAR datasets, https://www.cs.toronto.edu/~kriz/cifar.html
[2] ImageNet dataset, http://www.image-net.org/
[3] ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
[4] Deep Residual Learning for Image Recognition, Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, https://arxiv.org/abs/1512.03385
[8] C. A. Párraga, G. Brelstaff, T. Troscianko, I. R. Moorhead, Journal of the Optical Society of America A, 15(3): 563-569, 1998; or G. Brelstaff, A. Párraga, T. Troscianko, D. Carr, SPIE Vol. 2587, Geog. Inf. Sys., Photogram. and Geolog./Geophys. Remote Sensing, 150-159, 1995.



Friday, December 11, 2015

Perfect Recovery Conditions For Non-Negative Sparse Modeling / Compressive hyperspectral imaging via adaptive sampling and dictionary learning

 


Perfect Recovery Conditions For Non-Negative Sparse Modeling by Yuki Itoh, Marco F. Duarte, Mario Parente

Sparse modeling has been widely and successfully used in many applications such as computer vision, machine learning, and pattern recognition and, accompanying those applications, significant research has studied the theoretical limits and algorithm design for convex relaxations in sparse modeling. However, only little has been done on the theoretical limits of non-negative versions of sparse modeling. The behavior is expected to be similar to that of general sparse modeling, but a precise analysis has not been explored. This paper studies the performance of non-negative sparse modeling, especially for non-negativity-constrained and ℓ1-penalized least squares, and gives an exact bound for which this problem can recover the correct signal elements. We pose two conditions to guarantee the correct signal recovery: the minimum coefficient condition (MCC) and the non-linearity vs. subset coherence condition (NSCC). The former defines the minimum weight for each of the correct atoms present in the signal and the latter defines the tolerable deviation from the linear model relative to the positive subset coherence (PSC), a novel type of "coherence" metric. We provide rigorous performance guarantees based on these conditions and experimentally verify their precise predictive power in a hyperspectral data unmixing application.
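For readers who want to play with the program being analyzed, here is a minimal sketch of the non-negativity-constrained, ℓ1-penalized least squares problem, solved by plain projected gradient descent. The paper's contribution is the recovery analysis (MCC/NSCC), not this solver; the dictionary and sparsity level below are arbitrary.

```python
# The non-negativity-constrained, l1-penalized least squares program from the
# paper, solved by plain projected gradient descent; the paper's contribution
# is the recovery analysis (MCC/NSCC), not this solver. Data is synthetic.
import numpy as np

def nn_lasso(A, y, lam, iters=2000):
    # min_{x >= 0} 0.5 * ||Ax - y||^2 + lam * sum(x); sum(x) = ||x||_1 on x >= 0
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam
        x = np.maximum(x - grad / L, 0.0)   # gradient step, then project onto x >= 0
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 120))          # random dictionary (stand-in for spectra)
x_true = np.zeros(120)
x_true[[3, 40, 77]] = [0.9, 0.5, 0.7]
y = A @ x_true
x_hat = nn_lasso(A, y, lam=0.01)
print(np.nonzero(x_hat > 1e-3)[0])          # ideally the support [3, 40, 77]
```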
 
 
Compressive hyperspectral imaging via adaptive sampling and dictionary learning
In this paper, we propose a new sampling strategy for hyperspectral signals that is based on dictionary learning and singular value decomposition (SVD). Specifically, we first learn a sparsifying dictionary from training spectral data using dictionary learning. We then perform an SVD on the dictionary and use the first few left singular vectors as the rows of the measurement matrix to obtain the compressive measurements for reconstruction. The proposed method provides significant improvement over the conventional compressive sensing approaches. The reconstruction performance is further improved by reconditioning the sensing matrix using matrix balancing. We also demonstrate that the combination of dictionary learning and SVD is robust by applying them to different datasets.
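A hedged sketch of that sampling strategy, with illustrative sizes and random training data standing in for real spectra: learn a sparsifying dictionary, take its SVD, and use the leading left singular vectors as rows of the measurement matrix. The matrix-balancing refinement and the sparse reconstruction step are omitted.

```python
# A sketch of the proposed pipeline with illustrative sizes and random
# training data standing in for real spectra: learn a sparsifying dictionary,
# take its SVD, and use the leading left singular vectors as measurement
# rows. Matrix balancing and the sparse reconstruction step are omitted.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
train = rng.random((500, 64))              # 500 training spectra, 64 bands

dl = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
dl.fit(train)
D = dl.components_.T                       # 64 x 32 dictionary, atoms as columns

U, s, Vt = np.linalg.svd(D, full_matrices=False)
m = 10
Phi = U[:, :m].T                           # m x 64 measurement matrix

x = train[0]                               # a spectrum to "acquire"
y = Phi @ x                                # compressive measurements
# Reconstruction would solve y = Phi @ D @ a for a sparse a (omitted here).
print(Phi.shape, y.shape)
```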
 
 

Monday, November 09, 2015

GURLS vs LIBSVM: Performance Comparison of Kernel Methods for Hyperspectral Image Classification

A while back, a study compared random features to scattering network features for hyperspectral imagery. This time, the authors look at the difference between using LIBSVM and GURLS (which comes with an implementation of random features) when performing classification in that field.
 
GURLS vs LIBSVM: Performance Comparison of Kernel Methods for Hyperspectral Image Classification by Nikhila Haridas, V. Sowmya, K. P. Soman
Kernel based methods have emerged as one of the most promising techniques for Hyper Spectral Image classification and have attracted extensive research efforts in recent years. This paper introduces a new kernel based framework for Hyper Spectral Image (HSI) classification using the Grand Unified Regularized Least Squares (GURLS) library. The proposed work compares the performance of different kernel methods available in the GURLS package with the library for Support Vector Machines, namely LIBSVM. The assessment is based on HSI classification accuracy measures and computation time. The experiment is performed on two standard Hyper Spectral datasets, namely Salinas A and an Indian Pines subset, captured by the AVIRIS (Airborne Visible Infrared Imaging Spectrometer) sensor. From the analysis, it is observed that the GURLS library is competitive with LIBSVM in terms of prediction accuracy, whereas computation time seems to favor LIBSVM. The major advantage of the GURLS toolbox over LIBSVM is its simplicity, ease of use, automatic parameter selection and fast training and tuning of multi-class classifiers. Moreover, the GURLS package is provided with an implementation of the Random Kitchen Sinks algorithm, which can easily handle high dimensional Hyper Spectral Images at much lower computational cost than LIBSVM.
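Since the Random Kitchen Sinks point is the punchline here, a minimal sketch of the idea: approximate an RBF kernel with random Fourier features, then train by regularized least squares, which is just a linear solve. The toy data below stands in for Indian Pines / Salinas A; all parameters are arbitrary.

```python
# Random Kitchen Sinks in a nutshell: random Fourier features approximating
# an RBF kernel, followed by regularized least squares (a single linear
# solve). Toy data stands in for Indian Pines / Salinas A.
import numpy as np

rng = np.random.default_rng(0)
n, d, D, gamma, lam = 400, 30, 300, 0.1, 1e-3
X = rng.standard_normal((n, d))             # toy "spectra"
y = np.sign(X[:, 0] + 0.5 * X[:, 1])        # toy binary labels in {-1, +1}

W = rng.standard_normal((d, D)) * np.sqrt(2 * gamma)  # frequencies for exp(-gamma ||.||^2)
b = rng.uniform(0, 2 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)    # random features: Z @ Z.T ~ kernel matrix

w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)   # regularized least squares
print((np.sign(Z @ w) == y).mean())         # training accuracy on the toy data
```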
 
 

Tuesday, September 29, 2015

Anomaly Detection in High Dimensional Data: Hyperspectral data, movies and more...


From the CAIB report featured in The Modeler's Known Unknowns and Unknown Knowns
 
Anomaly detection is when you are concerned with the "unknown unknowns" or, to put it in a perspective that is currently sorely missing from many algorithms: you are dealing with sometimes adversarial/evading counterparties or unexpected/outside-model behaviors (outliers). There are some very sophisticated algorithms in machine learning and compressive sensing dealing with detailed classification, but when faced with unknown unknowns, you want to quantify anomaly detection, or how far data is from your "frame-of-mind" model. High dimensional data afforded by cheap memory and CMOS is likely making these needles harder to find. Here are some recent preprints on the subject that showed up on my radar screen. And yes, sparsity is sometimes key to detecting them. Enjoy!

We discuss recent progress in techniques for modeling and analyzing hyperspectral images and movies, in particular for detecting plumes of both known and unknown chemicals. We discuss novel techniques for robust modeling of the background in a hyperspectral scene, and for detecting chemicals of known spectrum we use partial least squares regression on a resampled training set to boost performance. For the detection of unknown chemicals we view the problem as an anomaly detection problem, and use novel estimators with low sample complexity for intrinsically low-dimensional data in high dimensions that enable us to model the "normal" spectra and detect anomalies. We apply these algorithms to benchmark data sets made available by Lincoln Labs at the Automated Target Detection program co-funded by NSF, DTRA and NGA, and compare, when applicable, to current state-of-the-art algorithms, with favorable results.
Optimal Sparse Kernel Learning for Hyperspectral Anomaly Detection
Zhimin Peng, Prudhvi Gurram, Heesung Kwon, Wotao Yin

In this paper, a novel framework of sparse kernel learning for Support Vector Data Description (SVDD) based anomaly detection is presented. In this work, optimal sparse feature selection for anomaly detection is first modeled as a Mixed Integer Programming (MIP) problem. Due to the prohibitively high computational complexity of the MIP, it is relaxed into a Quadratically Constrained Linear Programming (QCLP) problem. The QCLP problem can then be practically solved by using an iterative optimization method, in which multiple subsets of features are iteratively found as opposed to a single subset. The QCLP-based iterative optimization problem is solved in a finite space called the Empirical Kernel Feature Space (EKFS) instead of in the input space or the Reproducing Kernel Hilbert Space (RKHS). This is possible because the geometrical properties of the EKFS and the corresponding RKHS remain the same. Now, an explicit nonlinear exploitation of the data in a finite EKFS is achievable, which results in optimal feature ranking. Experimental results based on a hyperspectral image show that the proposed method can provide improved performance over the current state-of-the-art techniques.
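Not the paper's sparse kernel learning, but for context, the standard SVDD-flavored baseline it builds on can be run in a few lines with scikit-learn's one-class SVM; the toy "spectra" and parameters here are illustrative.

```python
# Not the paper's sparse kernel learning -- just the standard SVDD-flavored
# baseline it builds on, run with scikit-learn's one-class SVM on toy
# "spectra"; all parameters here are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, (500, 50))      # normal pixels
anomalies = rng.normal(4.0, 1.0, (10, 50))        # anomalous pixels

model = OneClassSVM(kernel='rbf', gamma='scale', nu=0.05).fit(background)
print(model.predict(anomalies))                   # -1 flags an anomaly
```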

MultiView Diffusion Maps
Ofir Lindenbaum, Arie Yeredor, Moshe Salhov, Amir Averbuch

In this study we consider learning a reduced dimensionality representation from datasets obtained under multiple views. Such multiple views of datasets can be obtained, for example, when the same underlying process is observed using several different modalities, or measured with different instrumentation. Our goal is to effectively exploit the availability of such multiple views for various purposes, such as non-linear embedding, manifold learning, spectral clustering, anomaly detection and non-linear system identification. Our proposed method exploits the intrinsic relation within each view, as well as the mutual relations between views. We do this by defining a cross-view model, in which an implied Random Walk process between objects is restrained to hop between the different views. Our method is robust to scaling of each dataset, and is insensitive to small structural changes in the data. Within this framework, we define new diffusion distances and analyze the spectra of the implied kernels. We demonstrate the applicability of the proposed approach on both artificial and real data sets.

A Framework of Sparse Online Learning and Its Applications by Dayong Wang, Pengcheng Wu, Peilin Zhao, Steven C.H. Hoi

The amount of data in our society has been exploding in the era of big data today. In this paper, we address several open challenges of big data stream classification, including high volume, high velocity, high dimensionality, high sparsity, and high class-imbalance. Many existing studies in data mining literature solve data stream classification tasks in a batch learning setting, which suffers from poor efficiency and scalability when dealing with big data. To overcome the limitations, this paper investigates an online learning framework for big data stream classification tasks. Unlike some existing online data stream classification techniques that are often based on first-order online learning, we propose a framework of Sparse Online Classification (SOC) for data stream classification, which includes some state-of-the-art first-order sparse online learning algorithms as special cases and allows us to derive a new effective second-order online learning algorithm for data stream classification. In addition, we also propose a new cost-sensitive sparse online learning algorithm by extending the framework with application to tackle online anomaly detection tasks where class distribution of data could be very imbalanced. We also analyze the theoretical bounds of the proposed method, and finally conduct an extensive set of experiments, in which encouraging results validate the efficacy of the proposed algorithms in comparison to a family of state-of-the-art techniques on a variety of data stream classification tasks.


This paper presents a new approach, based on polynomial optimization and the method of moments, to the problem of anomaly detection. The proposed technique only requires information about the statistical moments of the normal-state distribution of the features of interest and compares favorably with existing approaches (such as Parzen windows and 1-class SVM). In addition, it provides a succinct description of the normal state. Thus, it leads to a substantial simplification of the anomaly detection problem when working with higher dimensional datasets.


Anomaly Detection in Unstructured Environments using Bayesian Nonparametric Scene Modeling
Yogesh Girdhar, Walter Cho, Matthew Campbell, Jesus Pineda, Elizabeth Clarke, Hanumant Singh

This paper explores the use of a Bayesian non-parametric topic modeling technique for the purpose of anomaly detection in video data. We present results from two experiments. The first experiment shows that the proposed technique is automatically able to characterize the underlying terrain and detect anomalous flora in image data collected by an underwater robot. The second experiment shows that the same technique can be used on images from a static camera in a dynamic unstructured environment. The second dataset consists of video data from a static seafloor camera, capturing images of a busy coral reef. The proposed technique was able to detect all three instances of an underwater vehicle passing in front of the camera, amongst many other observations of fishes, debris, lighting changes due to surface waves, and benthic flora.

Sparsity in Multivariate Extremes with Applications to Anomaly Detection
Nicolas Goix (LTCI), Anne Sabourin (LTCI), Stéphan Clémençon (LTCI)
Capturing the dependence structure of multivariate extreme events is a major concern in many fields involving the management of risks stemming from multiple sources, e.g. portfolio monitoring, insurance, environmental risk management and anomaly detection. One convenient (non-parametric) characterization of extremal dependence in the framework of multivariate Extreme Value Theory (EVT) is the angular measure, which provides direct information about the probable 'directions' of extremes, that is, the relative contribution of each feature/coordinate of the 'largest' observations. Modeling the angular measure in high dimensional problems is a major challenge for the multivariate analysis of rare events. The present paper proposes a novel methodology aiming at exhibiting a sparsity pattern within the dependence structure of extremes. This is done by estimating the amount of mass spread by the angular measure on representative sets of directions, corresponding to specific sub-cones of R^d_+. This dimension reduction technique paves the way towards scaling up existing multivariate EVT methods. Beyond a non-asymptotic study providing a theoretical validity framework for our method, we propose as a direct application a first anomaly detection algorithm based on multivariate EVT. This algorithm builds a sparse 'normal profile' of extreme behaviours, to be confronted with new (possibly abnormal) extreme observations. Illustrative experimental results provide strong empirical evidence of the relevance of our approach.

Universal Anomaly Detection: Algorithms and Applications
Shachar Siboni, Asaf Cohen

Modern computer threats are far more complicated than those seen in the past. They are constantly evolving, altering their appearance, perpetually changing disguise. Under such circumstances, detecting known threats, a fortiori zero-day attacks, requires new tools, which are able to capture the essence of their behavior rather than some fixed signatures. In this work, we propose novel universal anomaly detection algorithms, which are able to learn the normal behavior of systems and alert for abnormalities, without any prior knowledge of the system model, nor any knowledge of the characteristics of the attack. The suggested method utilizes the Lempel-Ziv universal compression algorithm in order to optimally assign probabilities to normal behavior (during learning), then estimates the likelihood of new data (during operation) and classifies it accordingly. The suggested technique is generic, and can be applied to different scenarios. Indeed, we apply it to key problems in computer security. The first is detecting Botnet Command and Control (C&C) channels. A Botnet is a logical network of compromised machines which are remotely controlled by an attacker using a C&C infrastructure, in order to perform malicious activities. We derive a detection algorithm based on timing data, which can be collected without deep inspection, from open as well as encrypted flows. We evaluate the algorithm on real-world network traces, showing how a universal, low complexity C&C identification system can be built, with high detection rates and low false-alarm probabilities. Further applications include malicious tool detection via system call monitoring and data leakage identification.
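A toy illustration of the universal-compression idea: data that compresses poorly given a model built on normal behavior gets a high anomaly score. Here zlib (LZ77-based) stands in for the Lempel-Ziv probability assignment used in the paper; this captures the flavor of the method, not its exact scheme.

```python
# Toy version of the universal-compression idea: score a sample by how many
# extra bytes it costs to compress next to a "normal" training sequence.
# zlib (LZ77-based) stands in for the paper's Lempel-Ziv probability
# assignment; this is the flavor of the method, not its exact scheme.
import zlib

def compression_score(train: bytes, sample: bytes) -> float:
    # Extra compressed bytes needed for `sample` given `train`, per sample byte.
    base = len(zlib.compress(train))
    joint = len(zlib.compress(train + sample))
    return (joint - base) / max(len(sample), 1)

normal = b"GET /index HTTP/1.1 200\n" * 200       # stand-in for normal traffic
typical = b"GET /index HTTP/1.1 200\n" * 5
weird = bytes(range(120)) * 2

print(compression_score(normal, typical))   # low: well explained by the model
print(compression_score(normal, weird))     # high: poorly explained, flag it
```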

Anomaly Detection for malware identification using Hardware Performance Counters by Alberto Garcia-Serrano

Computers are widely used today by most people. Internet-based applications like ecommerce or ebanking attract criminals who, using sophisticated techniques, try to introduce malware on the victim's computer. But not only computer users are at risk; so are smartphone or smartwatch users, smart cities, Internet of Things devices, etc. Different techniques have been tested against malware. Currently, pattern matching is the default approach in antivirus software. Machine Learning is also successfully being used. Continuing this trend, in this article we propose an anomaly-based method using the hardware performance counters (HPC) available in almost any modern computer architecture. Because anomaly detection is an unsupervised process, new malware and APTs can be detected even if they are unknown.

 

Tuesday, July 14, 2015

Video: Compressive Hyperspectral Imaging via Approximate Message Passing

As New Horizons flies by Pluto today at a speed of 16+ km/s, there will be a short window of opportunity for the spacecraft to take the most accurate images of this planet before it continues its journey to the Kuiper belt (the speed of the spacecraft makes it impossible to orbit Pluto).




Images like the one above are taken by LORRI and are black and white (panchromatic), but the instrument that will provide much of the science data for this Pluto encounter will be Ralph.

...Ralph consists of three panchromatic (black-and-white) and four color imagers inside its Multispectral Visible Imaging Camera (MVIC), as well as an infrared compositional mapping spectrometer called the Linear Etalon Imaging Spectral Array (LEISA). LEISA is an advanced, miniaturized short-wavelength infrared (1.25-2.50 micron) spectrometer provided by scientists from NASA’s Goddard Space Flight Center. MVIC operates over the bandpass from 0.4 to 0.95 microns. Ralph’s suite of eight detectors – seven charge-coupled devices (CCDs) like those found in a digital camera, and a single infrared array detector – are fed by a single, sensitive magnifying telescope with a resolution more than 10 times better than the human eye can see. The entire package operates on less than half the wattage of an appliance light bulb.
More details on this camera can be found here.

All this to say that any improvement in obtaining hyperspectral data, such as the data provided by Ralph during the fly-by, coupled with compression from cheap (power-wise) hardware, could eventually be very useful to future space missions (please note the 6.3 watts power use of the camera). It so happens that in compressive sensing we have the beginning of an answer, as exemplified by the hardware in the CASSI imager (many of the blog entries relating to hyperspectral imaging and compressive sensing can be found under this tag).

Today, Dror and colleagues show us how to reconstruct hyperspectral images taken by these compressive imagers using AMP solvers. Here is the tutorial video made by Jin Tan and Yanting Ma, followed by their preprint:




Compressive Hyperspectral Imaging via Approximate Message Passing by  Jin Tan, Yanting Ma, Hoover Rueda, Dror Baron, Gonzalo Arce

We consider a compressive hyperspectral imaging reconstruction problem, where three-dimensional spatio-spectral information about a scene is sensed by a coded aperture snapshot spectral imager (CASSI). The CASSI imaging process can be modeled as suppressing three-dimensional coded and shifted voxels and projecting these onto a two-dimensional plane, such that the number of acquired measurements is greatly reduced. On the other hand, because the measurements are highly compressive, the reconstruction process becomes challenging. We previously proposed a compressive imaging reconstruction algorithm that is applied to two-dimensional images based on the approximate message passing (AMP) framework. AMP is an iterative algorithm that can be used in signal and image reconstruction by performing denoising at each iteration. We employed an adaptive Wiener filter as the image denoiser, and called our algorithm "AMP-Wiener." In this paper, we extend AMP-Wiener to three-dimensional hyperspectral image reconstruction. Applying the AMP framework to the CASSI system is challenging, because the matrix that models the CASSI system is highly sparse, and such a matrix is not suitable for AMP and makes it difficult for AMP to converge. Therefore, we modify the adaptive Wiener filter to fit the three-dimensional image denoising problem, and employ a technique called damping to solve the divergence issue of AMP. Our simulation results show that AMP-Wiener in three-dimensional hyperspectral imaging problems outperforms existing widely-used algorithms such as gradient projection for sparse reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST) given the same amount of runtime. Moreover, in contrast to GPSR and TwIST, AMP-Wiener need not tune any parameters, which simplifies the reconstruction process.
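To give a feel for the AMP iteration and the role of damping, here is a one-dimensional toy sketch where soft thresholding stands in for the adaptive Wiener denoiser and a Gaussian matrix stands in for the (much less AMP-friendly) CASSI operator. This is not the authors' AMP-Wiener implementation.

```python
# A 1-D toy of the AMP iteration with damping; soft thresholding stands in
# for the adaptive Wiener denoiser and a Gaussian matrix for the much less
# AMP-friendly CASSI operator. Not the authors' AMP-Wiener implementation.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 160, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x, z, damp = np.zeros(n), y.copy(), 0.7
for _ in range(30):
    r = x + A.T @ z                              # pseudo-data fed to the denoiser
    sigma = np.sqrt(np.mean(z ** 2))             # effective noise level estimate
    x_new = np.sign(r) * np.maximum(np.abs(r) - sigma, 0.0)   # "denoise"
    z_new = y - A @ x_new + z * (np.count_nonzero(x_new) / m) # Onsager term
    x = damp * x_new + (1 - damp) * x            # damping: blend with the previous
    z = damp * z_new + (1 - damp) * z            # iterate to keep AMP from diverging
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))    # relative error
```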
 
Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute
 

Tuesday, May 26, 2015

Self-Dictionary Sparse Regression for Hyperspectral Unmixing: Greedy Pursuit and Pure Pixel Search are Related - implementation -

MMV as a way to perform unmixing in hyperspectral imaging:


Self-Dictionary Sparse Regression for Hyperspectral Unmixing: Greedy Pursuit and Pure Pixel Search are Related by Xiao Fu, Wing-Kin Ma, Tsung-Han Chan, José M. Bioucas-Dias

This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in that it allows simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, especially, the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise---including its identification of the (unknown) number of endmembers---under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments.  
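Since SPA turns out to be the pivot of the story, here is a minimal successive projection algorithm on synthetic mixed pixels; under the pure pixel assumption it should pick one pixel per endmember. Sizes and data are illustrative.

```python
# A minimal successive projection algorithm (SPA) on synthetic mixed pixels;
# under the pure pixel assumption it should pick one pixel per endmember.
# Sizes and data are illustrative.
import numpy as np

def spa(X, k):
    # Pick k columns: repeatedly take the most energetic pixel, project it out.
    R = X.astype(float).copy()
    picked = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        picked.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)
    return picked

rng = np.random.default_rng(0)
E = rng.random((50, 4))                     # 4 endmember spectra over 50 bands
C = rng.dirichlet(np.ones(4), size=300).T   # abundances on the simplex
C[:, :4] = np.eye(4)                        # plant pure pixels at columns 0..3
X = E @ C
print(sorted(spa(X, 4)))                    # ideally [0, 1, 2, 3]
```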
 
An implementation is available on Tsung-Han Chan's code page.
 
