As I was reading the presentation Universal Denoising and Approximate Message Passing by Dror Baron, Yanting Ma and Junan Zhu, I realized I had not covered the ITA conference yet.
Here are some of the abstracts of the presentations made there, some of which have been covered here on Nuit Blanche already. Enjoy!
- Capacity-Approaching Sparse-Graph Code for Compressive Phase Retrieval Ramtin Pedarsani, UC Berkeley; Kangwook Lee, UC Berkeley; Kannan Ramchandran*, UC Berkeley
- Finding globally optimal k-means clusterings through convex relaxation Rachel Ward*, UT Austin
- Limits on Support Recovery with Probabilistic Models: An Information-Spectrum Approach Jonathan Scarlett*, EPFL; Volkan Cevher, EPFL
- Locating outliers in large matrices with adaptive compressive sampling Xingguo Li, University Of Minnesota; Jarvis Haupt*, University Of Minnesota
- Compressive modeling of stationary autoregressive processes Georg Kail, Delft University Of Technology; Geert Leus*, Delft University Of Technology
- The benefits of nonconvex optimization for compressed sensing, Galen Reeves*, Duke
- A totally unimodular view of structured sparsity Volkan Cevher*, Swiss Federal Institute of Technology in Lausanne
- Stable Super-Resolution of Positive Sources Veniamin Morgenshtern*, Stanford; Emmanuel Candes, Stanford
- Information guided sequential sensing Yao Xie*, Georgia Tech; Sebastian Pokutta, Georgia Tech; Gabor Braun, Georgia Tech
- Doubly sparse compressed sensing Grigory Kabatiansky*, IITP RAS; Serge Vladuts, Aix-Marseille University, France; Cedric Tavernier, Assystem, France
- Sub-linear Time Compressed Sensing using Sparse-Graph Codes, Xiao Li*, UC Berkeley; Sameer Pawar, UC Berkeley; Kannan Ramchandran, UC Berkeley
- Universal Denoising and Approximate Message Passing, Dror Baron*, NC State
- Universal compressed sensing of Markov sources Shirin Jalali*, Princeton; H. Vincent Poor, Princeton
- Generalized Compressive Interferometry for Optical Modal Analysis George Atia*, University of Central Florida; Ayman Abouraddy, University of Central Florida; Davood Mardani, University of Central Florida
- Optimal Data-Dependent Hashing for Nearest Neighbors Alexandr Andoni*, Simons Institute for the Theory of Computing, UC Berkeley; Ilya Razenshteyn, MIT
- Streaming and space efficient approximation algorithms for Subset Sum Anna Gal*, UT Austin; Jing-Tang Jang, Google; Nutan Limaye, IIT Bombay; Meena Mahajan, Institute of Mathematical Sciences Chennai; Karteek Sreenivasaiah, Max Planck Institute for Informatics
- Nearly optimal and deterministic sparse Hadamard transform Mahdi Cheraghchi*, UC Berkeley; Piotr Indyk, MIT
- On the stability of collaborative dictionary learning Waheed Bajwa*, Rutgers
- Learning Minimal Latent Directed Information Polytrees Negar Kiyavash*, UIUC
- Exploration-Exploitation: a low rank matrix completion solution and performance bounds Urbashi Mitra*, USC; Sunav Choudhary, USC
- Low-rank methods to address the big data issues in the synchrophasor measurements in the power system Meng Wang*, RPI; Pengzhi Gao, RPI; Joe H. Chow, RPI; Scott Ghiocel, RPI; Bruce Fardanesh, New York Power Authority; George Stefopoulos, New York Power Authority
- THE NETFLIX RECOMMENDER SYSTEM: WHAT IT IS AND HOW WE IMPROVE IT Carlos Gomez-Uribe*, Netflix, Inc.
- Optimal CUR Matrix Decompositions, Christos Boutsidis*, Yahoo; David Woodruff, IBM
- Single pass spectral sparsification in dynamic streams, Michael Kapralov*, IBM T.J Watson; Yin-Tat Lee, MIT; Cameron Musco, MIT; Christopher Musco, MIT; Aaron Sidford, MIT
- Sketching for M-Estimators: A Unified Approach to Robust Regression Ken Clarkson*, IBM Research - Almaden; David Woodruff, IBM Almaden
- Subspace embeddings and learning applications Haim Avron, IBM T.J Watson; Jelani Nelson, Harvard; Huy Nguyen*, UC Berkeley; David Woodruff, IBM Almaden
- Elementary estimators for high-dimensional statistical models Eunho Yang, IBM T.J Watson; Aurelie Lozano, IBM T.J Watson; Pradeep Ravikumar*, UT Austin
- Inference in High-Dimensional Varying Coefficient Models Mladen Kolar*, University Of Chicago; Damian Kozbur, ETH
- Stochastic Iterative Greedy Algorithms for Sparse Reconstruction Nam Nguyen, MIT; Deanna Needell*, Claremont McKenna College; Tina Woolf, Claremont Graduate University
- High dimensional variable selection: a decision theoretic approach Oluwasanmi Koyejo*, Stanford; Rajiv Khanna, University Of Texas; Joydeep Ghosh, University Of Texas; Russell Poldrack, Stanford
- Big Tensor Subspace Learning for Dynamic MRI Georgios B. Giannakis*, University Of Minnesota
- Clustering with Distributed Data: A Consensus+Innovations Approach Soummya Kar*, CMU
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints Ben Recht*, UC Berkeley; Laurent Lessard, UC Berkeley; Andrew Packard, UC Berkeley
- Non-convex Robust PCA Praneeth Netrapalli, Microsoft Research; U. Niranjan, UC Irvine; Sujay Sanghavi*, UT Austin; Anima Anandkumar, UC Irvine; Prateek Jain, Microsoft Research
- Simple, Efficient and Neural Algorithms for Sparse Coding Ankur Moitra*, MIT
- Correlation decay, Phase transitions, and Counting Alistair Sinclair, UC Berkeley; Piyush Srivastava*, Caltech; Daniel Stefankovic, University Of Rochester; Yitong Yin, Nanjing University
- K-clustering of Semi-Random graphs, Alexandra Kolla*, UIUC
- Incremental clustering: The case for extra clusters, Margareta Ackerman*, Florida State University; Sanjoy Dasgupta, UC San Diego
- The Lovász Local Lemma as a Random Walk, Dimitris Achlioptas*, UC Santa Cruz; Fotis Iliopoulos, UC Berkeley
- Inverse Rendering using Color+Depth Camera, Ha Nguyen, EPFL; Minh Do*, University Of Illinois
- Environmental information from noise, Peter Gerstoft*, UC San Diego
- Clustering techniques applied to the human seismic footprint, Nima Riahi*, UC San Diego; Peter Gerstoft, UC San Diego
- Earthquake Ground Motion Prediction using Seismic Noise, Marine Denolle*, UC San Diego
- Parametric Bilinear Generalized Approximate Message Passing, Phil Schniter*, Ohio State; Jason Parker, AFRL
- Capacity-achieving sparse regression codes via approximate message passing decoding Cynthia Rush, Yale; Adam Greig, Cambridge; Ramji Venkataramanan*, Cambridge
- Algorithms for Parameter Estimation in Mixture Models, Aditya Bhaskara*, Google
- Control-based analog-to-digital conversion without sampling and quantization, Hans-Andrea Loeliger*, ETH; Georg Wilckens, ETH
- Sub-Nyquist Detection and Estimation Using Equivalent-time Sampling, Tarig Ballal, King Abdullah University of Science and Technology; Tareq Al-Naffouri*, King Abdullah University of Science and Technology
- Estimation with Norm Regularization, Arindam Banerjee*, University Of Minnesota; Sheng Chen, University Of Minnesota; Farideh Fazayeli, University Of Minnesota; Vidyashankar Sivakumar, University Of Minnesota
- Learning Probability Distributions over Structured Spaces, Arthur Choi, UC Los Angeles; Guy Van den Broeck, KU Leuven; Adnan Darwiche*, UC Los Angeles
- Polar codes for the broadcast channel with confidential messages and constrained randomization, Rémi Chou, Georgia Tech; Matthieu Bloch*, Georgia Tech
- Pipelining for Accuracy in Randomized Digital Computation, Marc Riedel*, University Of Minnesota
- Learning from pairwise comparisons, Mark Davenport*, Georgia Tech
- See-Through Imaging and Occupancy Estimation with RF Signals, Yasamin Mostofi*, UC Santa Barbara
- Clustering multi-way data: A novel algebraic approach Eric Kernfeld, University Of Washington; Misha Kilmer, Tufts University; Shuchin Aeron*, Tufts University
- A graph sampling perspective for semi-supervised learning Aly El Gamal*, USC; Aamir Anis, USC; Salman Avestimehr, USC; Antonio Ortega, USC
- Landmarking Manifolds with Gaussian Processes, John Paisley*, Columbia
- Learning in Repeated Auctions Kareem Amin, UPenn; Afshin Rostamizadeh*, Google; Umar Syed, Google
- Machine Learning in Apache Spark Ameet Talwalkar*, UC Los Angeles
- Optimistic Concurrency Control in the Design and Analysis of Parallel Learning Algorithms, Joseph Gonzalez*, UC Berkeley; Xinghao Pan, UC Berkeley; Stefanie Jegelka, MIT; Joseph Bradley, UC Berkeley; Michael Jordan, UC Berkeley
- Subspace Clustering with Ordered Weighted L1 minimization Ulas Ayaz*, Brown University
- Correlation Clustering on Big Graphs, Dimitris Papailiopoulos*, UC Berkeley; Xinghao Pan, UC Berkeley; Benjamin Recht, UC Berkeley; Kannan Ramchandran, UC Berkeley; Michael I. Jordan, UC Berkeley
- Pessimistic active learning using robust bias-aware prediction Anqi Liu, University Of Illinois Chicago; Lev Reyzin*, University Of Illinois Chicago; Brian Ziebart, University Of Illinois Chicago
Here are video presentations of some of the talks:
- Interference in finite-sized highly dense millimeter wave networks, Kiran Venugopal, UT Austin; Matthew Valenti, West Virginia University; Robert Heath, UT Austin
- Information guided sequential compressed sensing, Yao Xie, Georgia Tech
- A Reusable Holdout for Guaranteed Classifier Validation, Moritz Hardt, IBM Research - Almaden
- Unified scaling of polar codes: error exponent, scaling exponent, moderate deviations, and error floors, Marco Mondelli, EPFL; Seyed Hamed Hassani, ETH; Ruediger Urbanke, EPFL