Saturday, December 30, 2017

Lectures on Randomized Numerical Linear Algebra by Petros Drineas and Michael Mahoney



Here is a welcome addition to the Randomized Numerical Linear Algebra / RandNLA tag.

Lectures on Randomized Numerical Linear Algebra by Petros Drineas, Michael W. Mahoney

This chapter is based on lectures on Randomized Numerical Linear Algebra from the 2016 Park City Mathematics Institute summer school on The Mathematics of Data.
Here is the table of contents (a small code sketch of the sampling idea behind Section 4 follows the list):

1 Introduction 
2 Linear Algebra 
2.1 Basics
2.2 Norms
2.3 Vector norms
2.4 Induced matrix norms
2.5 The Frobenius norm
2.6 The Singular Value Decomposition
2.7 SVD and Fundamental Matrix Spaces
2.8 Matrix Schatten norms
2.9 The Moore-Penrose pseudoinverse
2.10 References
3 Discrete Probability
3.1 Random experiments: basics
3.2 Properties of events
3.3 The union bound
3.4 Disjoint events and independent events
3.5 Conditional probability
3.6 Random variables
3.7 Probability mass function and cumulative distribution function
3.8 Independent random variables
3.9 Expectation of a random variable
3.10 Variance of a random variable
3.11 Markov’s inequality
3.12 The Coupon Collector Problem
3.13 References
4 Randomized Matrix Multiplication
4.1 Analysis of the RANDMATRIXMULTIPLY algorithm
4.2 Analysis of the algorithm for nearly optimal probabilities
4.3 Bounding the two norm
4.4 References
5 RandNLA Approaches for Regression Problems
5.1 The Randomized Hadamard Transform
5.2 The main algorithm and main theorem
5.3 RandNLA algorithms as preconditioners
5.4 The proof of Theorem 47
5.5 The running time of the RANDLEASTSQUARES algorithm
5.6 References
6 A RandNLA Algorithm for Low-rank Matrix Approximation
6.1 The main algorithm and main theorem
6.2 An alternative expression for the error
6.3 A structural inequality
6.4 Completing the proof of Theorem 80
6.4.1 Bounding Expression (104)
6.5 Running time
6.6 References
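To make Section 4 concrete, here is a minimal numpy sketch (not the authors' code) of the sampling idea behind the RANDMATRIXMULTIPLY algorithm: pick c column/row pairs with probabilities proportional to the products of their Euclidean norms, rescale each sampled outer product by 1/(c p_i), and sum.

import numpy as np

def rand_matrix_multiply(A, B, c, rng):
    # Sample c column/row index pairs with the nearly optimal
    # probabilities p_i ~ ||A[:, i]|| * ||B[i, :]|| of Section 4.
    n = A.shape[1]
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p /= p.sum()
    idx = rng.choice(n, size=c, p=p)
    scale = 1.0 / (c * p[idx])
    # Sum of the rescaled outer products A[:, i] B[i, :]
    return (A[:, idx] * scale) @ B[idx, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 2000))
B = rng.standard_normal((2000, 50))
C = rand_matrix_multiply(A, B, c=500, rng=rng)
print(np.linalg.norm(C - A @ B, 'fro') / np.linalg.norm(A @ B, 'fro'))

The estimator is unbiased, and the lectures' analysis bounds its Frobenius-norm error as a function of the number of samples c.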






Friday, December 29, 2017

Job: Two Postdocs: Large-scale, storage-optimal continuous optimization / Convex relaxations for discrete optimization problems, EPFL, Switzerland

Volkan just sent me the following:
Dear Igor,

It has been a while. ...May I also advertise some postdoc positions at nuit-blanche as well? I would really appreciate the shoutout... 


Sure Volkan, here they are: 


ERC-funded positions (2) for postdoctoral researchers
Large-scale, storage-optimal continuous optimization
Convex relaxations for discrete optimization problems

Laboratory for Information and Inference Systems (LIONS)
Ecole Polytechnique Fédérale de Lausanne (EPFL)
lions.epfl.ch

The LIONS group is looking for postdoctoral candidates with solid experience in developing optimization theory and algorithms. Knowledge of continuous relaxations for discrete submodular optimization problems is a big plus.

The ideal candidate should have a research profile in applied mathematics, computer science, electrical engineering, or a related field. The initial appointment is for one year and can be extended up to three years.

Candidates should send their CV, a research statement outlining their expertise and interests, any supplemental information, and a list of at least three references with full contact information to the LIONS Lab Administrator:
Gosia Baltaian (gosia.baltaian@epfl.ch)
Review of applications begins immediately, and continues until the position is filled. Short-listed candidates may be invited for an interview.

For more on our research interests, please see

https://lions.epfl.ch/research



----
best,
-------
Prof. Volkan Cevher
Laboratory for Information and Inference Systems
http://lions.epfl.ch






Thursday, December 28, 2017

Job: Internship position (Spring/Summer 2018 + PhD position follow-up), IFPEN, France

Laurent just sent me the following the other day:


Dear Igor


Following the high-quality applications (and practical results in only 3 months) after your post on:
http://nuit-blanche.blogspot.fr/2016/12/internship-signal-and-image.html
let me share with you the follow-up internship (5 months):


Analysis and prediction of wavelet and filter-bank frames performance for machine learning 
and full PDF description 
with a subsequent PhD proposal (Data characterization and classification with invariant multiscale features)


Objectives and context follow (more at the webpage)


We wish to study large datasets of experimental data (e.g. physico-chemical spectral signals, microscopy or geophysical subsurface images) with a view toward clustering, classification and learning. When data satisfy regularity properties, they often admit sparse or compressible representations in a judicious transformed domain: a few transformed coefficients provide an accurate approximation of the data. Such representations, like multiscale or wavelet transforms, are beneficial to subsequent processing, and they form the core of novel data processing methodologies, such as Scattering networks/transforms (SN) or Functional Data Analysis (FDA). Given the variety of such transforms, and without prior knowledge, it is not obvious how to find the most suitable representation for a given set of data.

The aim of this subject is to investigate potential relations between transform properties and data compressibility on the one hand, and classification/clustering performance on the other hand, especially with respect to robustness to shifts/translations or noise in data features, which matters in experimental applications. Building on recent work, the first objective is to develop a framework allowing the use of different sparsifying transformations (bases or frames of wavelets and multiscale transformations) at the input of reference SN algorithms. This will make it possible to evaluate the latter on a variety of experimental datasets, with the aim of choosing the most appropriate one, both in terms of performance and usability, since the redundancy of some transformations may hinder their application to large datasets. A particular interest could be laid on complex-like transformations, which may improve either the sparsification or the "invariance properties" of the transformed data; their importance has been underlined recently for deep convolutional networks.

Then, starting from real data, the trainee will develop realistic models reproducing the expected behaviors in the data, for instance related to shifts or noise. Finally, the relative clustering/classification performances will be assessed with respect to different transformation choices, and their impact on both realistic models and real data. A particular interest could be laid on either transform properties (redundancy, frame bounds, asymptotic properties) or the resulting multiscale statistics of the data.
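As a quick illustration of the compressibility idea above, here is a small sketch (assuming numpy and the PyWavelets package; the signal and every parameter choice are made up for the example) that measures how much of a smooth signal's energy is captured by a few wavelet coefficients:

import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.sin(6 * np.pi * t**2) + 0.05 * rng.standard_normal(t.size)  # smooth chirp + noise

# Multiscale (wavelet) decomposition, flattened into one coefficient vector
coeffs = np.concatenate(pywt.wavedec(signal, "db4", level=5))

# Compressibility: energy fraction captured by the largest 5% of coefficients
mag = np.sort(np.abs(coeffs))[::-1]
keep = int(0.05 * mag.size)
print((mag[:keep] ** 2).sum() / (mag ** 2).sum())

Repeating this with different wavelet families or frames, and correlating the resulting compressibility with downstream clustering/classification scores, is essentially the first step of the internship described above.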


Hoping you can still share this information given your venture at http://www.lighton.io/


Best


Sure Laurent ! The more qualified people around Paris, the better (a rising tide lifts all boats).



Saturday, December 23, 2017

Saturday Morning Videos: Machine Learning Summer School, Max Planck Institute for Intelligent Systems, Tübingen, Germany

The organizers (Ruth Urner, Michael Hirsch, Ilya Tolstikhin and Bernhard Schölkopf) of the Machine Learning Summer School that took place in June 2017 at the Max Planck Institute for Intelligent Systems, Tübingen, Germany, just released the videos of some of the talks. Here they are along with some of the slides:
The full YouTube playlist is here.

h/t Danilo



Thursday, December 21, 2017

Program / Streaming: The Future of Random Projections : a mini-workshop






Florent Krzakala and I are organizing a mini-workshop on The Future of Random Projections tomorrow.

Streaming will be live here:



Random Projections have proven useful in many areas ranging from signal processing to machine learning. In this informal mini-workshop, we aim to bring together researchers from different areas to discuss this exciting topic and its future use in the area of big data, heavy computations, and Deep Learning. 
 
Where and When:

Friday, December 22, 9:30am to 12:30pm (Paris time)
Amphitheatre A, IPGG, 6 rue Jean Calvin

The program:
  • Romain Couillet (Centrale Supelec), A Random Matrix Approach to Random Feature Maps
  • In this talk, I will discuss how advanced tools from random matrix theory make it possible to better understand and improve the large-dimensional statistics of many standard machine learning methods, and in particular non-linear random feature maps. We will notably show that the performance of extreme learning machines (which can be seen as mere ridge-regularized linear regression on non-linear RFMs) is easily understood, particularly so when the input data arise from a mixture model. (A random-feature-map sketch follows the program below.)
    Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. An increasingly popular approach is to first compress the database into a representation called a linear sketch, which satisfies all the mentioned requirements, and then learn the desired information using only this sketch, which can be significantly faster than using the full data if the sketch is small.
    In this talk, we introduce a generic methodology to fit a mixture of probability distributions on the data, using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embedding and Random Features expansions. It is seen to correspond to linear measurements of the underlying probability distribution of the data, and the estimation problem is analyzed under the lens of Compressive Sensing (CS). We extend CS results to our infinite-dimensional framework, give generic conditions for successful estimation, and apply this analysis to many problems, with a focus on mixture model estimation. We base our method on the construction of random sketching operators, using kernel mean embeddings and random features, such that a Restricted Isometry Property (RIP) condition holds in the Banach space of finite signed measures, with high probability, for a number of random features that only depends on the complexity of the problem. We also describe a flexible heuristic greedy algorithm to estimate mixture models from a sketch, and apply it on synthetic and real data.

  • Gael Varoquaux (INRIA), Recursive nearest agglomeration (ReNA): fast clustering for approximation of structured signals

  • In this work, we revisit fast dimension reduction approaches, such as random projections and random sampling. Our goal is to summarize the data to decrease the computational costs and memory footprint of subsequent analysis. Such dimension reduction can be very efficient when the signals of interest have a strong structure, as with images. We focus on this setting and investigate feature clustering schemes for data reduction that capture this structure. An impediment to fast dimension reduction is that good clustering comes with large algorithmic costs. We address it by contributing a linear-time agglomerative clustering scheme, Recursive Nearest Agglomeration (ReNA). Unlike existing fast agglomerative schemes, it avoids the creation of giant clusters. We empirically validate that it approximates the data as well as traditional variance-minimizing clustering schemes that have a quadratic complexity. In addition, we analyze signal approximation with feature clustering and show that it can remove noise, improving subsequent analysis steps. As a consequence, data reduction by clustering features with ReNA yields very fast and accurate models, making it possible to process large datasets on a budget. Our theoretical analysis is backed by extensive experiments on publicly-available data that illustrate the computational efficiency and the denoising properties of the resulting dimension reduction scheme. (See the feature-clustering sketch after the program below.)
  • Arthur Mensch (INRIA), Stochastic Subsampling for Factorizing Huge Matrices
  • We present a matrix-factorization algorithm that scales to input matrices with a huge number of rows and columns. Learned factors may be sparse or dense and/or nonnegative, which makes our algorithm suitable for dictionary learning, sparse component analysis, and nonnegative matrix factorization. Our algorithm streams matrix columns while subsampling them to iteratively learn the matrix factors. At each iteration, the row dimension of a new sample is reduced by subsampling, resulting in lower time complexity compared to a simple streaming algorithm. Our method comes with convergence guarantees to reach a stationary point of the matrix-factorization problem. We demonstrate its efficiency on massive functional magnetic resonance imaging data (2 TB), and on patches extracted from hyperspectral images (103 GB). For both problems, which involve different penalties on rows and columns, we obtain significant speed-ups compared to state-of-the-art algorithms. (A related mini-batch dictionary-learning sketch follows the program below.)
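As a companion to Romain Couillet's talk, here is a minimal numpy sketch of a random feature map: random Fourier features whose inner products approximate a Gaussian kernel. The dimensions and bandwidth below are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(0)
n, d, D = 500, 10, 200          # samples, input dim, number of random features
X = rng.standard_normal((n, d))
gamma = 0.5                     # Gaussian kernel bandwidth

# Random Fourier features: z(x) = sqrt(2/D) * cos(W^T x + b), with
# W ~ N(0, 2*gamma) so that E[z(x)^T z(y)] = exp(-gamma * ||x - y||^2)
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

K_approx = Z @ Z.T
K_exact = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())  # shrinks as D grows

Ridge regression on Z is then exactly the "extreme learning machine" viewpoint mentioned in the abstract.
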
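For the feature-clustering theme of the ReNA talk, here is a sketch using scikit-learn's FeatureAgglomeration, one of the traditional variance-minimizing schemes the paper compares against (ReNA itself is not used here, and the synthetic data are made up for the example):

import numpy as np
from sklearn.cluster import FeatureAgglomeration

rng = np.random.default_rng(0)
# Structured "signals": random walks, so neighboring features are correlated
X = np.cumsum(rng.standard_normal((200, 1000)), axis=1)
X += 0.1 * rng.standard_normal(X.shape)

agglo = FeatureAgglomeration(n_clusters=50)   # 1000 features -> 50 clusters
X_red = agglo.fit_transform(X)                # reduced data for fast analysis
X_back = agglo.inverse_transform(X_red)       # approximation of the original
print(np.linalg.norm(X - X_back) / np.linalg.norm(X))
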
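And for Arthur Mensch's talk, a much simpler related baseline is scikit-learn's streaming dictionary learning; the talk's algorithm additionally subsamples the row dimension at each iteration, which is not done in this sketch:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 64))      # e.g., flattened image patches

# Streams mini-batches of rows to iteratively learn the factors
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   batch_size=64, random_state=0)
code = dico.fit_transform(X)             # sparse codes
D = dico.components_                     # learned dictionary (32 atoms)
print(code.shape, D.shape)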

Sunday, December 17, 2017

The Future of Random Projections : A mini-workshop (December 22)

Because this period of the year is magic (see Sunday Morning Insight: And You, What Are You Waiting For ?), Florent and I decided to organize a mini-workshop on Random Projections on Friday not far from where the Curies made their discoveries. Here is the announcement. Please note that we are ok with remote presentations.



The Future of Random Projections : a mini-workshop
----------------------------------------------------------------------
Random Projections have proven useful in many areas ranging from signal processing to machine learning. In this informal mini-workshop, we aim to bring together researchers from different areas to discuss this exciting topic and its future use in the area of big data, heavy computations, and Deep Learning.  
Where and When:
  • Friday, December 22, 9:00am to 12:30pm (Paris time)

Confirmed Speakers:


If you want to join or to give a talk, please reply to this email. Since this is short notice, we can also accept Skype contributions.

Florent Krzakala and Igor Carron
The schedule for the workshop will be on Nuit Blanche on Thursday.

Image credit: Rich Baraniuk



Saturday, December 16, 2017

Saturday Morning Video: Petascale Deep Learning on a Single Chip, Speaker: Tapabrata Ghosh, Vathys


Tapa Ghosh of Vathys.ai is presenting a new technology that aims at dealing with the one bottleneck few people in AI hardware are focusing on: data movement. Give this man his funding !
(Unrelated: at LightOn, we address data movement in a different fashion)




Vathys.ai is a deep learning startup that has been developing a new deep learning processor architecture with the goal of massively improved energy efficiency and performance. The architecture is also designed to be highly scalable, amenable to next generation DL models. Although deep learning processors appear to be the "hot topic" of the day in computer architecture, the majority (we argue all) of such designs incorrectly identify the bottleneck as computation and thus neglect the true culprits of inefficiency: data movement and miscellaneous control-flow processor overheads. This talk will cover many of the architectural strategies that the Vathys processor uses to reduce data movement and improve efficiency. The talk will also cover some circuit level innovations and will include a quantitative and qualitative comparison to many DL processor designs, including the Google TPU, demonstrating numerical evidence for massive improvements compared to the TPU and other such processors.

h/t Iacopo and Reddit 





Friday, December 15, 2017

CfP: LVA/ICA 2018, 14th International Conference on Latent Variable Analysis and Signal Separation, July 2-6, 2018, University of Surrey, Guildford, UK

Mark just sent me the following:

Dear Igor,
The forthcoming LVA/ICA 2018 international conference on Latent Variable Analysis and Signal Separation may be of interest to many Nuit Blanche readers, particularly those working on sparse coding or dictionary learning for source separation. The submission deadline is approaching! Please see below for the latest Call for Papers.
Best wishes, Mark
Here it is:

====================================
== LVA/ICA 2018 - CALL FOR PAPERS ==
14th International Conference on Latent Variable Analysis and Signal Separation
July 2-6, 2018
University of Surrey, Guildford, UK
Paper submission deadline: January 15, 2018
====================================

The International Conference on Latent Variable Analysis and Signal Separation, LVA/ICA 2018, is an interdisciplinary forum where researchers and practitioners can experience a broad range of exciting theories and applications involving signal processing, applied statistics, machine learning, linear and multilinear algebra, numerical analysis and optimization, and other areas targeting Latent Variable Analysis problems.
We are pleased to invite you to submit research papers to the 14th LVA/ICA which will be held at the University of Surrey, Guildford, UK, from the 2nd to the 6th of July, 2018. The conference is organized by the Centre for Vision, Speech and Signal Processing (CVSSP); and the Institute of Sound Recording (IoSR).
The proceedings will be published in Springer-Verlag's Lecture Notes in Computer Science (LNCS).

== Keynote Speakers ==
- Orly Alter
Scientific Computing & Imaging Institute and Huntsman Cancer Institute, University of Utah, USA
- Andrzej Cichocki
Brain Science Institute, RIKEN, Japan
- Tuomas Virtanen
Laboratory of Signal Processing
Tampere University of Technology, Finland

== Topics ==
Prospective authors are invited to submit original papers (8-10 pages in LNCS format) in areas related to latent variable analysis, independent component analysis and signal separation, including but not limited to:
- Theory:
* sparse coding, dictionary learning
* statistical and probabilistic modeling
* detection, estimation and performance criteria and bounds
* causality measures
* learning theory
* convex/nonconvex optimization tools
* sketching and censoring for large scale data
- Models:
* general linear or nonlinear models of signals and data
* discrete, continuous, flat, or hierarchical models
* multilinear models
* time-varying, instantaneous, convolutive, noiseless, noisy, over-complete, or under-complete mixtures
* low-rank models, graph models, online models
- Algorithms:
* estimation, separation, identification, detection, blind and semi-blind methods, non-negative matrix factorization, tensor decomposition, adaptive and recursive estimation
* feature selection
* time-frequency and wavelet based analysis
* complexity analysis
* non-conventional signals (e.g. graph signals, quantum sources)
- Applications:
* speech and audio separation, recognition, dereverberation and denoising
* auditory scene analysis
* image segmentation, separation, fusion, classification, texture analysis
* biomedical signal analysis, imaging, genomic data analysis, brain-computer interface
- Emerging related topics:
* sparse learning
* deep learning
* social networks
* data mining
* artificial intelligence
* objective and subjective performance evaluation

== Venue ==
LVA/ICA 2018 will be held at the University of Surrey, Guildford, in the South East of England, UK. The university is a ten-minute walk from the town centre, which offers a vibrant blend of entertainment, culture and history. Guildford is 40 minutes from London by train, and convenient for both London Heathrow and London Gatwick airports.

== Conference Chairs ==
- General Chairs:
Mark Plumbley - University of Surrey, UK
Russell Mason - University of Surrey, UK
- Program Chairs:
Sharon Gannot - Bar-Ilan University, Israel
Yannick Deville - Université Paul Sabatier Toulouse 3, France

== Important Dates ==
- Paper submission deadline: January 15, 2018
- Notification of acceptance: March 19, 2018
- Camera ready submission: April 16, 2018
- Summer School: July 2, 2018
- Conference: July 3-6, 2018

== Website ==
For further information, including how to submit, please visit:

We look forward to your participation,
The LVA/ICA 2018 Organizing Committee
===============================
--
Prof Mark D Plumbley
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing (CVSSP)
University of Surrey, Guildford, Surrey, GU2 7XH, UK





Thursday, December 14, 2017

CSjob: Multimedia / Research Scientist or Principal Research Scientist - Signal Processing, MERL, Massachusetts, USA

Petros just sent me the following:

Dear Igor, 
I hope you are doing well. We are excited to have a new opening in the Computational Sensing Team at MERL. I would appreciate it if you could post this on your blog, or otherwise disseminate it as you see fit, and encourage anyone you think might be a good candidate to apply. Posting and application link is also here: http://www.merl.com/employment/employment.php#MM29
Thanks!
Petros
Sure Petros, here is the job ad:



MM29 - Multimedia / Research Scientist or Principal Research Scientist - Signal Processing
MERL's Computational Sensing Team is seeking an exceptional researcher in the area of signal processing, with particular emphasis on signal acquisition and active sensing technologies. Applicants are expected to hold a Ph.D. degree in Electrical Engineering, Computer Science, or a closely related field.
The successful candidate will have an extensive signal processing background and familiarity with related techniques, such as compressive sensing and convex optimization. Specific experience with wave propagation or PDE constrained inverse problems, or with signal acquisition via ultrasonic, radio, optical or other sensing or imaging modalities, is a plus. Applicants must have a strong publication record in any of these or related areas, demonstrating novel research achievements.
As a member of our team, the successful candidate will conduct original research that aims to advance state-of-the-art solutions in the field, with opportunities to work on both fundamental and application-motivated problems. Your work will involve initiating new projects with long-term research goals and leading research efforts.
MERL is one of the most academically-oriented industrial research labs in the world, and the ideal environment to thrive as a leader in signal processing. MERL strongly supports, encourages, and values academic activities such as publishing and presenting research results at top conferences, collaborating with university professors and students, organizing workshops and challenges, and generally maintaining an influential presence in the scientific community.








Wednesday, December 13, 2017

Tonight: Paris Machine Learning Meetup #4, Season 5: K2, Datathon ICU, Scikit-Learn, Multimedia fusion, Private Machine Learning



So today is Paris Machine Learning Meetup #4, Season 5. Wow ! Thanks to Invivoo for sponsoring this meetup (food and drinks afterwards) and especially thanks for giving us this awesome place!


The video streaming is here:


Capacity := +/- 170 seats / first-come-first-served / then doors close

Schedule :

6:45PM doors open / 7-9:00PM talks / 9-10:00PM drinks/food / 10:00PM end



Gael Varoquaux (INRIA), Some new and cool things in Scikit-Learn

An update on the scikit-learn project: new and ongoing features, code improvements, and ecosystem.

Nhi Tran (Invivoo), Multimedia fusion for information retrieval and classification

“Multimodal information fusion is a core part of various real-world multimedia applications. Image and text are two of the major modalities that are being fused and have been receiving special attention from the multimedia community. This talk focuses on the joint modelling of image and text by learning a common representation space for these two modalities. Such a joint space can be used to address the image/text retrieval and classification applications.”
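As a toy illustration of a "common representation space" for two modalities, here is a sketch using classical canonical correlation analysis from scikit-learn (the talk is about learned multimedia representations; CCA is only a simple stand-in, and the synthetic data are made up):

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 500
latent = rng.standard_normal((n, 2))     # shared "content" of each item

# Two modalities: noisy linear views of the same latent content,
# standing in for image features and text features.
X_img = latent @ rng.standard_normal((2, 50)) + 0.5 * rng.standard_normal((n, 50))
X_txt = latent @ rng.standard_normal((2, 30)) + 0.5 * rng.standard_normal((n, 30))

cca = CCA(n_components=2).fit(X_img, X_txt)
Z_img, Z_txt = cca.transform(X_img, X_txt)   # common representation space

# The two views should be strongly correlated in the shared space
print(np.corrcoef(Z_img[:, 0], Z_txt[:, 0])[0, 1])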

Morten Dahl, Private Machine Learning

By mixing machine learning with cryptographic tools such as homomorphic encryption, we may hope, for instance, to train models on sensitive data previously out of reach. Although these techniques are still maturing, in this talk we will look at some of them and how they were applied to a few concrete use cases.
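For a flavor of the cryptographic building blocks involved, here is a toy sketch of additive secret sharing, a standard primitive in private machine learning (this is plain Python, not any specific library, and the modulus choice is arbitrary). The shares are individually random, yet they can be added locally and the sum reconstructed:

import secrets

Q = 2**61 - 1  # public modulus

def share(x, n=3):
    # Split x into n shares that sum to x mod Q; any n-1 shares reveal nothing
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

a, b = 12345, 67890
sa, sb = share(a), share(b)
sc = [(x + y) % Q for x, y in zip(sa, sb)]   # parties add their shares locally
assert reconstruct(sc) == a + b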

Igor Carron (LightOn.io), Hardware: The War for AI supremacy, Recent developments, a short review

This is a small review of recent hardware developments in Machine Learning/Deep Learning.

Tuesday, December 12, 2017

The Case for Learned Index Structures

Here is a different kind of The Great Convergence: neural networks going after data structures (hashes, etc.) and eventually database systems...




Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes. The key idea is that a model can learn the sort order or structure of lookup keys and use this signal to effectively predict the position or existence of records. We theoretically analyze under which conditions learned indexes outperform traditional index structures and describe the main challenges in designing learned index structures. Our initial results show that by using neural nets we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order of magnitude in memory over several real-world data sets. More importantly though, we believe that the idea of replacing core components of a data management system through learned models has far reaching implications for future systems designs and that this work just provides a glimpse of what might be possible.
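A tiny sketch of the core idea (assuming numpy; the data and model are made up, and the paper uses staged neural nets rather than the linear fit below): learn the cumulative distribution of the sorted keys to predict a record's position, then fix up the prediction with a bounded local search.

import bisect
import numpy as np

rng = np.random.default_rng(0)
keys = np.sort(rng.lognormal(size=100_000))   # sorted key array
pos = np.arange(keys.size)

# "Learned index": a model of key -> position (here, a linear CDF fit)
coeffs = np.polyfit(keys, pos, deg=1)
max_err = int(np.ceil(np.abs(np.polyval(coeffs, keys) - pos).max()))

def lookup(key):
    p = int(np.polyval(coeffs, key))          # predicted position
    lo, hi = max(p - max_err, 0), min(p + max_err + 1, keys.size)
    return lo + bisect.bisect_left(keys[lo:hi], key)  # bounded correction

assert lookup(keys[1234]) == 1234

A better model tightens max_err and hence shrinks the final search window, which is where the reported speedups over B-Trees come from.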






Monday, December 11, 2017

Compressive 3D ultrasound imaging using a single sensor

Pieter just sent me the following:

Dear Igor,
I have been following your blog for a couple of years now as it served as an excellent introduction to the field of CS and an active source of inspiration for new ideas. Many thanks for that! It was quite a journey, but finally we managed to get some form of CS working in the field of ultrasound imaging. In our paper (online today: http://advances.sciencemag.org/content/3/12/e1701423, and a short video about this work: https://www.youtube.com/watch?v=whbbaF1nT4A ) we show that 3D ultrasound imaging can be done using only one sensor and a simple coding mask. Unfortunately we do not show any phase transition map and there is not much exploitation of sparsity, but it does show that hardware prototyping and the utilisation of signal structure in conjunction with linear algebra can reveal powerful new ways of imaging.
It would mean a lot to me (a long-held dream) if you could mention our paper on your blog some time.


Kind regards,
Pieter Kruizinga
Awesome Pieter !



Compressive 3D ultrasound imaging using a single sensor by Pieter Kruizinga, Pim van der Meulen, Andrejs Fedjajevs, Frits Mastik, Geert Springeling, Nico de Jong, Johannes G. Bosch and Geert Leus

Three-dimensional ultrasound is a powerful imaging technique, but it requires thousands of sensors and complex hardware. Very recently, the discovery of compressive sensing has shown that the signal structure can be exploited to reduce the burden posed by traditional sensing requirements. In this spirit, we have designed a simple ultrasound imaging device that can perform three-dimensional imaging using just a single ultrasound sensor. Our device makes a compressed measurement of the spatial ultrasound field using a plastic aperture mask placed in front of the ultrasound sensor. The aperture mask ensures that every pixel in the image is uniquely identifiable in the compressed measurement. We demonstrate that this device can successfully image two structured objects placed in water. The need for just one sensor instead of thousands paves the way for cheaper, faster, simpler, and smaller sensing devices and possible new clinical applications. 
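For readers who want to play with the idea, here is a minimal numpy sketch of compressive recovery in the same spirit (a random matrix stands in for the physical mask-induced measurement operator, all sizes are made up, and this is not the authors' reconstruction code):

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8            # scene pixels, measurements, sparsity

x = np.zeros(n)                 # k-sparse "scene"
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Each row plays the role of one coded observation by a single sensor
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# ISTA for min 0.5 * ||A z - y||^2 + lam * ||z||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
z = np.zeros(n)
for _ in range(500):
    z = z - A.T @ (A @ z - y) / L                       # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold

print(np.linalg.norm(z - x) / np.linalg.norm(x))  # small: x is recovered

The paper's point is that the coding mask makes the rows of the effective measurement operator sufficiently different that every voxel is identifiable from a single sensor instead of thousands.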





Monday, December 04, 2017

Nuit Blanche in Review (October and November 2017)

It's been two months since the last Nuit Blanche in Review (September 2017). We've had two Paris Machine Learning meetups and a two-day meeting of France is AI, and Nuit Blanche featured two theses and a few job postings. With NIPS 2017 about to start, I also recall last year's NIPS in Barcelona, where there was a sense that the community would move in on other areas besides computer vision. From some general takeaways from #NIPS2016:
  • With the astounding success of Deep Learning algorithms, other communities of science have essentially yielded to these tools in a matter of two or three years. I felt that the main question at the meeting was: which field would be next ? Since the Machine Learning/Deep Learning community was able to elevate itself thanks to high-quality datasets, from MNIST all the way to Imagenet, it is only fair to see where this is going with the release of a few datasets during the conference, including Universe from OpenAI. Control systems and simulators (forward problems in science) seem the next target.
Well, if you take a look at the papers from these past two months mentioned here on Nuit Blanche, it looks like GANs and other methods have essentially made their way into the building of recovery solvers: i.e. algorithms dedicated to building images/data back from measurements. The recent interest in the development of Deep Learning for physics makes it likely we will soon build better sensing hardware.

Another item of interest to us at LightOn this past month is the realization that Biologically Inspired Random Projections is a thing.

Enjoy the postings.


Implementation

In-depth
Hardware
Thesis
Meetup
Videos and slides:
CfP
Job:


credit: NASA / JPL / Ricardo Nunes