Wednesday, December 11, 2019

Tonight: Paris Machine Learning Meetup #2 Season 7: Symbolic Maths, Data Generation through GANs, "Prévisions Retards" @SNCF, Retail and AI, Rapids.ai Leveraging GPUs

** Nuit Blanche is now on Twitter: @NuitBlog **


A big thank you to Scaleway for hosting us in their inspiring office and sponsoring the networking event afterwards.


So this is quite exciting: our meetup group now has 7,999 members, and we are about to organize a meetup in a city paralyzed by strikes. Over the course of this meetup's existence, we have seen worse.

For those of you who will not be able to make it, all the information, the slides, and the link to the streaming are below:


Tabular data are the most common kind of data within companies. Generating synthetic data that respects the statistical properties of the original data can have several applications: machine learning that respects data privacy, improving the robustness of a model with respect to data drift, etc. Since 2018, there has been an increasing number of academic publications presenting the use of GANs on this type of data, particularly on patient medical data. We have performed a proof of concept on real data, and present the results of several models from the research literature, namely the Wasserstein GAN, the Wasserstein GAN with Gradient Penalty and the Cramér GAN, with the objective of "model compatibility", i.e. the possibility of using synthetic data in place of real data to train a classifier.
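For readers curious about what "model compatibility" means in practice, here is a rough sketch (mine, not the speakers' code): train the same classifier once on real data and once on the GAN's synthetic output, and compare both on a held-out set of real data. The variables real_X, real_y, synthetic_X and synthetic_y are placeholders.

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hold out a real test set; the synthetic set comes from the trained GAN (placeholders here).
X_train, X_test, y_train, y_test = train_test_split(real_X, real_y, test_size=0.3, random_state=0)

clf_real = GradientBoostingClassifier().fit(X_train, y_train)
clf_synth = GradientBoostingClassifier().fit(synthetic_X, synthetic_y)

# "Model compatibility": the two scores should be close.
auc_real = roc_auc_score(y_test, clf_real.predict_proba(X_test)[:, 1])
auc_synth = roc_auc_score(y_test, clf_synth.predict_proba(X_test)[:, 1])
print(f"AUC trained on real data: {auc_real:.3f}, trained on synthetic data: {auc_synth:.3f}")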

2. Eloïse Nonne, Soumaya Ihihi, "Prévisions Retards", a Machine Learning project led by e.SNCF's Data IoT team.
Its goal is to integrate predictions of train delays into the SNCF mobile application. Every day, our model predicts delays for the next 7 days, at each stop, for every train in the Paris area network. The challenge of this project is to improve the reliability of passenger information and to provide more relevant routes for the application's users. We will present the project, from the definition of needs and exploratory data analysis to its industrialization in the cloud and the reliability of its predictions.

This talk is focused on AI and ML applications in retail. Discover how Carrefour is transforming itself through the introduction of the Google - Carrefour Lab, presented by Elina Ashkinazi-Ildis, Director of the Lab. Then go further with the "shelf-out detection" use case presented by Kasra Mansouri, Data Scientist at Artefact.

RAPIDS makes it possible to run end-to-end data science pipelines entirely on GPUs. It capitalizes on the parallelization capabilities of GPUs to accelerate data preprocessing pipelines, with a pandas-like dataframe syntax. GPU-optimized versions of scikit-learn algorithms are available, and RAPIDS also integrates with the major deep learning frameworks.
This talk will present RAPIDS and its capabilities, and how to integrate it into your pipelines.
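For those who have never tried it, here is a minimal sketch of the pandas-like syntax (assuming a CUDA-capable GPU and the cudf/cuml packages installed; the file and column names are made up for the example):

import cudf
from cuml.ensemble import RandomForestClassifier

df = cudf.read_csv("transactions.csv")            # placeholder file name
df = df.dropna()
df = df[df["amount"] > 0]                         # pandas-like filtering, running on the GPU

X = df[["amount", "hour", "store_id"]].astype("float32")
y = df["label"].astype("int32")

model = RandomForestClassifier(n_estimators=100)  # GPU version of the scikit-learn estimator
model.fit(X, y)
predictions = model.predict(X)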


Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be **surprisingly good** at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica.
https://arxiv.org/abs/1912.01412
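The paper represents expressions as trees serialized into sequences. As a hedged illustration of the general idea (this is not the authors' exact tokenization), here is how one might flatten a SymPy expression into prefix (Polish) notation tokens for a sequence-to-sequence model:

import sympy as sp

def to_prefix(expr):
    # Recursively flatten a SymPy expression tree into prefix tokens.
    if expr.is_Symbol or expr.is_Number:
        return [str(expr)]
    tokens = [type(expr).__name__]        # e.g. 'Add', 'Mul', 'Pow', 'sin'
    for arg in expr.args:
        tokens += to_prefix(arg)
    return tokens

x = sp.symbols("x")
print(to_prefix(sp.sin(x) * x + 2))       # something like ['Add', '2', 'Mul', 'x', 'sin', 'x']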



Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Wednesday, November 13, 2019

Paris Machine Learning Meetup #1 Season 7: Neuroscience & AI, Time Series, Deep Transfer Learning in NLP, Media Campaigns, Energy Forecasting

** Nuit Blanche is now on Twitter: @NuitBlog **


A big thank-you to Publicis Sapient for welcoming us to their inspiring office. Presentation slides will be available here. The streaming of the event can also be found here:






Publicis Sapient is a digital transformation partner helping companies and established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. Within Publicis Sapient, the Data Science Team builds machine learning products in order to support clients in their transformation.

Margot Fournier, Publicis Sapient. Classification of first-time visitors
A significant share of visitors to a site never return, making it crucial to identify levers that can decrease the bounce rate. For a client in the retail sector, we developed several models able to predict both the gender and the segment to which unlogged, unknown visitors belong. This makes it possible to personalize the experience from the first visit and prevent users from bouncing.
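As a hedged sketch of what such a model could look like (not Publicis Sapient's actual pipeline; the file and column names are invented), one can frame this as a multi-output classification over simple session features:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

sessions = pd.read_csv("first_visits.csv")        # placeholder file with one row per first visit
features = sessions[["pages_viewed", "time_on_site", "device_type_id", "referrer_id"]]
targets = sessions[["gender", "segment"]]

clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=200))
clf.fit(features, targets)

# Predict both targets for a new, unlogged visitor's first session
gender_pred, segment_pred = clf.predict(features.iloc[[0]])[0]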

Maxence Brochard, Publicis Sapient. Media campaign optimization
Internet users leave multiple traces of micro-conversions (searches, clicks, wishlist additions...) during their visit to an e-commerce site: these micro-conversions can be weak signals of a purchase in the near future. To analyze those signals, we built a solution that detects visitors who are likely to convert and targets them while optimizing media campaign budgets.




The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In https://bit.ly/2WLsMaQ, the authors argue that better understanding biological brains could play a vital role in building intelligent machines. They survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. Finally, they conclude by highlighting shared themes that may be key for advancing future research in both fields.




The recent M4 forecasting competition (https://www.mcompetitions.unic.ac.cy) has demonstrated that using one forecasting method alone is not the most efficient approach in terms of forecasting accuracy. In this talk, I will focus on an energy consumption forecasting use case integrating exogenous data such as weather conditions and open data. In particular, I will present a forecasting time series challenge and the best practices observed in the best submissions, and showcase an interesting approach based on a combination of classical statistical forecasting methods and machine learning algorithms, such as gradient boosting, for increased performance. Generalizing the use of these methods can be a major help in addressing the challenge of adjusting electricity demand and production.
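To make the combination idea concrete, here is a rough sketch (mine, not the speaker's; the file and column names are assumptions) that blends a classical Holt-Winters forecast with a gradient-boosting model built on lagged consumption and a weather feature:

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from statsmodels.tsa.holtwinters import ExponentialSmoothing

df = pd.read_csv("consumption.csv", parse_dates=["timestamp"], index_col="timestamp")

# Classical component: Holt-Winters with daily seasonality (hourly data assumed)
hw = ExponentialSmoothing(df["load"], seasonal="add", seasonal_periods=24).fit()
hw_forecast = hw.forecast(24)

# ML component: gradient boosting on lagged load and temperature
df["lag_24"] = df["load"].shift(24)
df["lag_168"] = df["load"].shift(168)
train = df.dropna()
gbm = GradientBoostingRegressor().fit(train[["lag_24", "lag_168", "temperature"]], train["load"])
# Stand-in for properly constructed future features, for illustration only
gbm_forecast = gbm.predict(train[["lag_24", "lag_168", "temperature"]].tail(24))

combined = 0.5 * hw_forecast.values + 0.5 * gbm_forecast   # simple equal-weight blend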




Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Saturday, November 09, 2019

Paris Machine Learning Meetup Hors Série #1: A Talk with François Chollet (Creator of the Keras Library)

** Nuit Blanche is now on Twitter: @NuitBlog **


The first Paris Machine Learning Meetup Hors Série of the season is a talk with François Chollet, creator of the Keras library.

This event is being recorded.


We thank Morning coworking for hosting us and LightOn for their support in organizing this event. 

Today, we welcome François Chollet. François is a researcher at Google and the creator of the Keras deep learning library (https://keras.io). He will talk to us about the new features of the TensorFlow library as well as give us some insight into the latest in deep learning research.
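For readers new to the library, here is a minimal tf.keras sketch (mine, not François' material) of the Keras API that now ships as TensorFlow's high-level interface:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train and y_train are placeholders for your own data
# model.fit(x_train, y_train, epochs=5, batch_size=32)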

Schedule :
  • 2pm : Keras & TensorFlow for Deep Learning
  • 2.30pm : Q&A
  • 2.40pm : Latest research in Deep Learning
  • 2.50pm : Q&A
  • 3pm : networking

NB:
1/ this is **not** a coding session
2/ this event does not include a buffet (drinks, food)


Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Friday, November 01, 2019

Videos: IMA Computational Imaging Workshop, October 14 - 18, 2019

** Nuit Blanche is now on Twitter: @NuitBlog ** 



Stanley Chan, Jeff Fessler, Justin Haldar, Ulugbek Kamilov, Saiprasad Ravishankar, Rebecca Willett, and Brendt Wohlberg just organized a workshop at IMA on computational imaging. Short story, as this blog just passed 8 million page views: the understanding of compressed sensing was in large part due, at least judging by the hit statistics on this blog, to an IMA meeting on the subject and to the fact that people could watch the videos afterward. Hoping for this workshop to follow the same path. Given the amount of ML in it, I wonder if it shouldn't have been called the TheGreatConvergence meeting :-)


This workshop will serve as a venue for presenting and discussing recent advances and trends in the growing field of computational imaging, where computation is a major component of the imaging system. Research on all aspects of the computational imaging pipeline from data acquisition (including non-traditional sensing methods) to system modeling and optimization to image reconstruction, processing, and analytics will be discussed, with talks addressing theory, algorithms and mathematical techniques, and computational hardware approaches for imaging problems and applications including MRI, tomography, ultrasound, microscopy, optics, computational photography, radar, lidar, astronomical imaging, hybrid imaging modalities, and novel and extreme imaging systems. The expanding role of computational imaging in industrial imaging applications will also be explored.
Given the rapidly growing interest in data-driven, machine learning, and large-scale optimization based methods in computational imaging, the workshop will partly focus on some of the key recent and new theoretical, algorithmic, or hardware (for efficient/optimized computation) developments and challenges in these areas. Several talks will focus on analyzing, incorporating, or learning various models including sparse and low-rank models, kernel and nonlinear models, plug-and-play models, graphical, manifold, tensor, and deep convolutional or filterbank models in computational imaging problems. Research and discussion of methods and theory for new sensing techniques including data-driven sensing, task-driven imaging optimization, and online/real-time imaging optimization will be encouraged. Discussion sessions during the workshop will explore the theoretical and practical impact of various presented methods and brainstorm the main challenges and open problems.
The workshop aims to encourage close interactions between mathematical and applied computational imaging researchers and practitioners, and bring together experts in academia and industry working in computational imaging theory and applications, with focus on data and system modeling, signal processing, machine learning, inverse problems, compressed sensing, data acquisition, image analysis, optimization, neuroscience, computation-driven hardware design, and related areas, and facilitate substantive and cross-disciplinary interactions on cutting-edge computational imaging methods and systems.




Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Thursday, October 10, 2019

Deep Compressed Sensing -implementation-

** Nuit Blanche is now on Twitter: @NuitBlog **

As promised back in May, the implementation of the Deep Compressed Sensing paper is now available.
Hi,
Thank you for your interest and your wait. Now the code accompanying our ICML paper is available at: https://github.com/deepmind/deepmind-research/tree/master/cs_gan
Best wishes,
Yan


Compressed sensing (CS) provides an elegant framework for recovering sparse signals from compressed measurements. For example, CS can exploit the structure of natural images and recover an image from only a few random measurements. CS is flexible and data efficient, but its application has been restricted by the strong assumption of sparsity and costly reconstruction process. A recent approach that combines CS with neural network generators has removed the constraint of sparsity, but reconstruction remains slow. Here we propose a novel framework that significantly improves both the performance and speed of signal recovery by jointly training a generator and the optimisation process for reconstruction via meta-learning. We explore training the measurements with different objectives, and derive a family of models based on minimising measurement errors. We show that Generative Adversarial Nets (GANs) can be viewed as a special case in this family of models. Borrowing insights from the CS perspective, we develop a novel way of improving GANs using gradient information from the discriminator.
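To give a flavour of this line of work, here is a hedged sketch (not DeepMind's code) of the basic reconstruction step that such approaches build on: given measurements y = A x, recover a signal by running gradient descent on the latent input z of a pretrained generator G; the paper then meta-learns this optimisation process jointly with the generator.

import torch

def reconstruct(G, A, y, latent_dim=100, steps=10, lr=0.1):
    # G: pretrained generator mapping a latent vector to a signal
    # A: measurement matrix, y: compressed measurements
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((A @ G(z).view(-1) - y) ** 2).sum()   # measurement error
        loss.backward()
        opt.step()
    return G(z).detach()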
Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn


Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog

Wednesday, October 09, 2019

Bayesian Inference with Generative Adversarial Network Priors

** Nuit Blanche is now on Twitter: @NuitBlog **

Dhruv let me know of the following

Hi Igor,
I hope you're doing well. Thanks for posting latest articles and relevant information on your blog. I'm a regular reader of it and really enjoy it.
Just wanted to share with you one of our recent works on Bayesian inference using Generative Adversarial Network priors (https://arxiv.org/abs/1907.09987). In the paper, we demonstrate the effectiveness of this approach (in learning better priors and efficient posterior sampling) for a physics-based inverse problem, but I think a similar idea can be applied to compressive sensing, other inverse problems, and uncertainty quantification tasks. So, I thought it might be of interest to your community and wanted to share it with you in case you would like to share it.

Best,
Dhruv

Thanks Dhruv !




Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a physical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to represent mathematically. In this manuscript we consider the use of Generative Adversarial Networks (GANs) in addressing these challenges. A GAN is a type of deep neural network equipped with the ability to learn the distribution implied by multiple samples of a given field. Once trained on these samples, the generator component of a GAN maps the iid components of a low-dimensional latent vector to an approximation of the distribution of the field of interest. In this work we demonstrate how this approximate distribution may be used as a prior in a Bayesian update, and how it addresses the challenges associated with characterizing complex prior distributions and the large dimension of the inferred field. We demonstrate the efficacy of this approach by applying it to the problem of inferring and quantifying uncertainty in the initial temperature field in a heat conduction problem from a noisy measurement of the temperature at later time.
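As a rough sketch of the mechanics (my own simplification, not the paper's code, and showing only a MAP point estimate rather than the full posterior sampling the authors perform): with a trained generator G as the prior and a differentiable forward model F, one balances the data misfit against the standard-normal prior on the latent vector.

import torch

def map_latent(G, F, y, latent_dim=64, sigma=0.1, steps=200, lr=1e-2):
    # G: trained GAN generator (the prior), F: forward physics model, y: noisy measurement
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        misfit = ((F(G(z)) - y) ** 2).sum() / (2 * sigma ** 2)   # Gaussian noise model
        prior = 0.5 * (z ** 2).sum()                             # iid standard-normal latent prior
        (misfit + prior).backward()
        opt.step()
    return G(z).detach()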

Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn


Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog

Tuesday, September 03, 2019

Nuit Blanche in Review (July-August 2019)

** Nuit Blanche is now on Twitter: @NuitBlog **

Landing in Oxia Planum, NASA/JPL/University of Arizona


Since the last Nuit Blanche in Review (June 2019), we've had some in-depth material, two posts about LightOn, a hardware implementation, some conferences, courses, and some job announcements. Enjoy!

In-depth:
LightOn:
CS Hardware:
Conferences:
Courses:
Jobs:


Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Tuesday, August 27, 2019

PRAIRIE AI Summer School, Paris, October 3-5th 2019

** Nuit Blanche is now on Twitter: @NuitBlog **




Julien just sent me the following:

Bonjour Igor, 
I would like to advertise the following event, which should be of interest to the readers of Nuit Blanche: https://project.inria.fr/paiss/ This is an AI summer school in Paris, which will take place from October 3rd to 5th (application deadline is September 6th). The speakers will be

Sure, Julien ! Here is the page.




Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Monday, August 26, 2019

LightOn’s Summer Blog Post Series:  Faith No Moore and A New Hope

** Nuit Blanche is now on Twitter: @NuitBlog **


Here are two installments of our Summer series on what we do at LightOn. These first two blog posts provide some context about why we think there is a need for our technology. All this comes in the context of this past week's announcements: Intel releasing its 10nm chip (more here), Cerebras announcing its 15 kW, trillion-transistor chip, and Habana's Gaudi chip.








We are expecting two more blog posts; please follow us on Medium.

Both posts were written by Julien Launay, a Machine Learning R&D engineer at LightOn AI Research.


Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Tuesday, August 20, 2019

Transfer Learning as a Tool for Reducing Simulation Bias: Application to Inertial Confinement Fusion

** Nuit Blanche is now on Twitter: @NuitBlog **

Using transfer learning in expensive Inertial Confinement Fusion experiments is probably the only way to speed up the search for the right parameters in this nuclear fusion quest.




Transfer Learning as a Tool for Reducing Simulation Bias: Application to Inertial Confinement Fusion, by B. Kustowski, Jim A. Gaffney, Brian K. Spears, Gemma J. Anderson, Jayaraman J. Thiagarajan, and Rushil Anirudh

We adapt a technique, known in the machine learning community as transfer learning, to reduce the bias of a computer simulation using very sparse experimental data. Unlike the Bayesian calibration, which is commonly used to estimate the simulation bias, transfer learning involves calculating an artificial neural network surrogate model of the simulations. Assuming that the simulation code correctly predicts trends in the experimental data but it is subject to unknown biases, we then partially retrain, or transfer learn, the initial surrogate model to match the experimental data. This process eliminates the bias while still taking advantage of the physics relations learned from the simulation. Transfer learning can be easily adapted to a wide range of problems in science and engineering. In this paper, we carry out numerical tests to investigate the applicability of this technique to predict inertial confinement fusion experiments under new conditions. Using our synthetic validation data set we demonstrate that an accurate predictive model can be built by retraining an initial surrogate model with experimental data volumes so small that they are relevant to the inertial confinement fusion problem. This opens up new opportunities for knowledge transfer and building predictive models in physics. After implementing transfer learning in a standard neural network, we successfully extended the method to a more complex, generative adversarial network architecture, which will be needed for predicting not only scalars but also diagnostic images in our future work.
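As a hedged illustration of the recipe described in the abstract (not the authors' code; the layer sizes and variable names are made up): fit a surrogate network on plentiful simulations, then freeze the early layers and retrain only the last layer on the sparse experimental data.

import tensorflow as tf

surrogate = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(9,)),   # 9 design parameters, as an example
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4),                                         # 4 predicted observables, as an example
])
surrogate.compile(optimizer="adam", loss="mse")
# surrogate.fit(sim_inputs, sim_outputs, epochs=100)        # large simulation dataset

# Transfer learning: keep the physics learned from simulations, correct the bias
for layer in surrogate.layers[:-1]:
    layer.trainable = False
surrogate.compile(optimizer="adam", loss="mse")             # recompile after freezing
# surrogate.fit(exp_inputs, exp_outputs, epochs=50)         # very sparse experimental data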



Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Monday, August 19, 2019

Enhanced Seismic Imaging with Predictive Neural Networks for Geophysics

** Nuit Blanche is now on Twitter: @NuitBlog **

This week, we will look into how some inverse problems can benefit from neural networks.




We propose a predictive neural network architecture that can be utilized to update reference velocity models as inputs to full waveform inversion. Deep learning models are explored to augment velocity model building workflows during 3D seismic volume reprocessing in salt-prone environments. Specifically, a neural network architecture, with 3D convolutional, de-convolutional layers, and 3D max-pooling, is designed to take standard amplitude 3D seismic volumes as an input. Enhanced data augmentations through generative adversarial networks and a weighted loss function enable the network to train with few sparsely annotated slices. Batch normalization is also applied for faster convergence. Moreover, a 3D probability cube for salt bodies is generated through ensembles of predictions from multiple models in order to reduce variance. Velocity models inferred from the proposed networks provide opportunities for FWI forward models to converge faster with an initial condition closer to the true model. In each iteration step, the probability cubes of salt bodies inferred from the proposed networks can be used as a regularization term in FWI forward modelling, which may result in an improved velocity model estimation while the output of seismic migration can be utilized as an input of the 3D neural network for subsequent iterations.
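As an illustrative sketch only (the layer sizes are guesses, not the authors' architecture; the weighted loss and GAN-based augmentations are omitted), here is a tiny 3D convolutional encoder/decoder mapping a seismic amplitude volume to a salt-body probability cube:

import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 64, 1))                       # amplitude volume patch
x = tf.keras.layers.Conv3D(16, 3, activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling3D(2)(x)
x = tf.keras.layers.Conv3D(32, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.Conv3DTranspose(16, 3, strides=2, activation="relu", padding="same")(x)
outputs = tf.keras.layers.Conv3D(1, 1, activation="sigmoid")(x)      # salt probability per voxel

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")          # plain BCE stands in for the weighted loss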






Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Friday, August 16, 2019

Job: Several postdocs, Ground Breaking Deep Learning Technology for Monitoring the Brain during Surgery with Commercialization Opportunity, University of Pittsburgh

** Nuit Blanche is now on Twitter: @NuitBlog **

Kayhan just sent me the following:

Dear Igor,

I hope you are doing well.

I don't know if you remember me but we have been in contact a few times while I was a Ph.D. student at UPenn.

My lab at the University of Pittsburgh has several postdoc positions open. More specifically, I would be thankful if you could advertise this position (Link: https://kayhan.dbmi.pitt.edu/sites/default/files/JobAd.pdf) to your audience.

Best,
Kayhan

Sure Kayhan, I remember! Here is the announcement:
Ground Breaking Deep Learning Technology for Monitoring the Brain during Surgery with Commercialization Opportunity 

We are developing a clinical tool based on deep learning to automatically detect stroke during surgery and alert the surgical team to avert complications and save lives. We are uniquely positioned at the intersection of the largest health care system in the US, the University of Pittsburgh Medical Center (UPMC), and top ranked academic institutions, the University of Pittsburgh (Pitt) and the Carnegie Mellon University (CMU). Our group consists of Pitt and UPMC faculty members who have complementary expertise in machine learning and in healthcare and specifically in deep learning, clinical informatics, neurology, and surgery. We develop novel deep learning and other machine learning methods for application to challenging clinical problems. We are very well funded by NIH, NSF, industry, and internal institutional grants.  
In the current project, we are developing a clinical tool that will automatically detect stroke and other adverse events during surgery from an array of monitoring information, and provide highly accurate real time alerts to the surgical team to make course corrections during surgery. The clinical tool is to be deployed in operating rooms for monitoring surgeries and providing high quality alerts.
The successful candidate will work with us in a highly collaborative environment that spans the computer laboratory and the operating room and will gain unique and valuable experience in deep learning, development of a tool for a clinical setting, and in commercialization.  
Expected qualifications: Genuinely motivated to develop and apply machine learning to clinical problems. Strong expertise in machine learning is required; expertise in statistics and experience with messy clinical data is a plus. Python fluency is required. Demonstrated ability to make meaningful contributions to projects with a research flavor is valuable.
Experience/Abilities
• Hands-on experience building predictive models
• Experience working with diverse data types including signal and structured data; experience with text data is a plus
• Experience in programming in Python; experience in additional languages (R, C/C++) is a plus
• Aware of current best practices in machine learning
• Fluency in one of the deep learning frameworks is a plus (PyTorch or Tensorflow)
• Knowledge of statistics, including hypothesis testing with parametric and non-parametric tests and basic probability
• PhD in computer science, electrical engineering, statistics or equivalent computational / quantitative fields (exceptional MS candidates will be considered)  
The goal of this project is to develop, evaluate and commercialize a tool for automatic detection of stroke during surgery. The successful candidate will have the rare opportunity to perform cutting-edge deep learning research and participate in a commercial endeavor.  
If interested, contact Shyam Visweswaran, MD, PhD at shv3@pitt.edu and Kayhan Batmanghelich, PhD at kayhan@pitt.edu. For details of ongoing research work, visit http://www.thevislab.com/ and https://kayhan.dbmi.pitt.edu/. The University of Pittsburgh is an Affirmative Action/Equal Opportunity Employer and values equality of opportunity, human dignity, and diversity. 


Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv
