Thursday, December 05, 2013

#NIPS2013 papers, workshops ...

So #NIPS2013 starts today with a set of tutorials and the workshops listed below. A few words first: if you are prudish or at work, don't go watch the photos on Twitter (on your desktop) for the #NIPS2013 hashtag just yet! Second, for those of you in Paris next week, we'll have our 6th ML meetup. Third, Andrej Karpathy has a nicer way of viewing the NIPS proceedings. It is here.

Without further ado:

Here are a few papers I found interesting, but the whole electronic proceedings are here (the full PDF is here):

Several posters from the workshops are listed below:

    CONTRIBUTED TALKS

    • Mu Li, Li Zhou, Zichao Yang, Aaron Li, Fei Xia, David Andersen and Alexander Smola.
      Parameter Server for Distributed Machine Learning
      We propose a parameter server framework to solve distributed machine learning problems. Both data and workload are distributed into client nodes, while server nodes maintain globally shared parameters, which are represented as sparse vectors and matrices. The framework manages asynchronous data communications between clients and servers. Flexible consistency models, elastic scalability and fault tolerance are supported by this framework. We present algorithms and theoretical analysis for challenging nonconvex and nonsmooth problems. To demonstrate the scalability of the proposed framework, we show experimental results on real data with billions of parameters.
      PDF (a toy sketch of the push/pull pattern appears after this list)
    • Yarin Gal and Zoubin Ghahramani.
      Pitfalls in the use of Parallel Inference for the Dirichlet Process
      Recent work done by Lovell, Adams, and Mansingka [2012] and Williamson, Dubey, and Xing [2013] has suggested an alternative parametrisation for the Dirichlet process in order to derive non-approximate parallel MCMC inference for it. This approach to parallelisation has been picked up and implemented in several different fields [Chahuneau et al., 2013, Pan et al., 2013]. In this paper we show that the approach suggested is impractical due to an extremely unbalanced distribution of the data. We characterise the requirements of efficient parallel inference for the Dirichlet process and show that the proposed inference fails most of these conditions (while approximate approaches often satisfy most of them). We present both theoretical and experimental evidence of this, analysing the load balance for the inference showing that it is independent of the size of the dataset and the number of nodes available in the parallel implementation, and end with preliminary suggestions of alternative paths of research for efficient non-approximate parallel inference for the Dirichlet process.
      PDF
    • Yingyu Liang, Maria-Florina Balcan and Vandana Kanchanapally.
      Distributed PCA and k-Means Clustering
      This paper proposes a distributed PCA algorithm, with the theoretical guarantee that any good approximation solution on the projected data for k-means clustering is also a good approximation on the original data, while the projected dimension required is independent of the original dimension. When combined with the distributed coreset-based clustering approach in [3], this leads to an algorithm in which the number of vectors communicated is independent of the size and the dimension of the original data. Our experiment results demonstrate the effectiveness of the algorithm.
      PDF
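
Since the parameter server item above is all about architecture, here is a minimal single-process Python sketch of the push/pull pattern it describes. All names are mine (illustrative, not the paper's API), and a real deployment would shard keys across several server nodes and run the workers asynchronously under the flexible consistency models the abstract mentions:

```python
import numpy as np

class ToyParameterServer:
    """Single-process stand-in for the server nodes holding shared parameters."""
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def pull(self, keys):
        # Workers fetch only the (sparse) keys they need.
        return self.w[keys]

    def push(self, keys, grads, lr=0.1):
        # The server applies (possibly stale) worker updates.
        self.w[keys] -= lr * grads

def worker_step(server, X, y, keys):
    """One logistic-regression SGD step on a worker's local data shard."""
    w = server.pull(keys)
    p = 1.0 / (1.0 + np.exp(-X[:, keys] @ w))
    server.push(keys, X[:, keys].T @ (p - y) / len(y))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = (X[:, 0] > 0).astype(float)
server = ToyParameterServer(10)
for _ in range(200):                      # sequential stand-in for many workers
    worker_step(server, X, y, np.arange(10))
```

The point of the pattern is that workers only ever exchange (key, value) pairs with the servers, never raw data, which is what makes asynchrony and fault tolerance tractable.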
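The distributed PCA paper above also reduces to a short two-round protocol: each node ships a small SVD summary of its shard instead of its raw rows. This is a schematic reading of the approach on my part, not the authors' exact algorithm, and their guarantee (that a good k-means solution on the projected data remains good on the original data) is not checked here:

```python
import numpy as np

def local_summary(A_local, t):
    # Each node keeps its top-t right singular directions, scaled by the
    # singular values: t vectors travel to the coordinator, not all rows.
    U, s, Vt = np.linalg.svd(A_local, full_matrices=False)
    return np.diag(s[:t]) @ Vt[:t]

def distributed_pca(shards, t, k):
    stacked = np.vstack([local_summary(A, t) for A in shards])
    _, _, Vt = np.linalg.svd(stacked, full_matrices=False)
    return Vt[:k]                 # rows span the global k-dim subspace

rng = np.random.default_rng(1)
shards = [rng.normal(size=(500, 50)) for _ in range(4)]
P = distributed_pca(shards, t=10, k=5)
projected = shards[0] @ P.T       # feed this to k-means instead of the raw data
```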

    POSTERS

    • Julien-Charles Lévesque, Christian Gagné and Robert Sabourin.
      Ensembles of Budgeted Kernel Support Vector Machines for Parallel Large Scale Learning
      In this work, we propose to combine multiple budgeted kernel support vector machines (SVMs) trained with stochastic gradient descent (SGD) in order to exploit large databases and parallel computing resources. The variance induced by budget restrictions of the kernel SVMs is reduced through the averaging of predictions, resulting in greater generalization performance. The variance of the training runs results in a diversity of predictions, which can help explain the better performance. Finally, the proposed method is intrinsically parallel, which means that parallel computing resources can be exploited in a straightforward manner.
      PDF
    • Zhen Qin, Vaclav Petricek, Nikos Karampatziakis, Lihong Li and John Langford.
      Efficient Online Bootstrapping for Large Scale Learning
      Bootstrapping is a useful technique for estimating the uncertainty of a predictor, for example, confidence intervals for prediction. It is typically used on small to moderate sized datasets, due to its high computation cost. This work describes a highly scalable online bootstrapping strategy, implemented inside Vowpal Wabbit, that is several times faster than traditional strategies. Our experiments indicate that, in addition to providing a black box-like method for estimating uncertainty, our implementation of online bootstrapping may also help to train models with better prediction performance due to model averaging.
      PDF (a minimal online-bootstrap sketch appears after this list)
    • Arun Kumar, Nikos Karampatziakis, Paul Mineiro, Markus Weimer and Vijay Narayanan.
      Distributed and Scalable PCA in the Cloud
      Principal Component Analysis (PCA) is a popular technique with many applications. Recent randomized PCA algorithms scale to large datasets but face a bottleneck when the number of features is also large. We propose to mitigate this issue using a composition of structured and unstructured randomness within a randomized PCA algorithm. Initial experiments using a large graph dataset from Twitter show promising results. We demonstrate the scalability of our algorithm by implementing it both on Hadoop, and a more flexible platform named REEF.
      PDF
    • Nedim Lipka.
      Towards Distributed Reinforcement Learning for Digital Marketing with Spark
      A variety of problems in digital marketing can be modeled as Markov decision processes and solved by dynamic programming with the goal of calculating the policy that maximizes the expected discounted reward. Algorithms, such as policy iteration, require a state transition and a reward model, which can be estimated based on a given data set. In this paper, we compare the execution times for estimating the transition function in a map-reduce fashion as the data set becomes large in terms of the number of records and features. To this end, we create different-sized Spark and Hadoop clusters in the Amazon cloud computing environment. The in-memory cluster computing system Spark outperforms Hadoop, running up to 71% faster. Furthermore, we study the execution times of policy iteration running on Spark clusters and show the execution time reduction gained by increasing the number of instances in the cluster.
      PDF
    • Tuukka Ruotsalo, Jaakko Peltonen, Manuel J. A. Eugster, Dorota Glowacka, Giulio Jacucci, Aki Reijonen and Samuel Kaski.
      Lost in Publications? How to Find Your Way in 50 Million Scientific Documents
      Researchers must navigate big data. Current scientific knowledge includes 50 million published articles. How can a system help a researcher find relevant documents in her field? We introduce IntentRadar, an interactive search user interface and search engine that anticipates the user's search intents by estimating them from the user's interaction with the interface. The estimated intents are visualized on a radial layout that organizes potential intents as directions in the information space. The intent radar assists users in directing their search by allowing feedback to be targeted on keywords that represent the potential intents. Users can provide feedback by manipulating the position of the keywords on the radar. The system then learns and visualizes improved estimates and corresponding documents. IntentRadar has been shown to significantly improve users' task performance and the quality of retrieved information without compromising task execution time.
      PDF
    • Michael Kane and Bryan Lewis.
      cnidaria: A Generative Communication Approach to Scalable, Distributed Learning
      This paper presents a scalable software framework that facilitates large-scale learning and numerical computing. Unlike existing MapReduce frameworks, our design is not limited to embarrassingly parallel computing challenges. The framework sits on top of existing storage infrastructures, and the results of a computation may be left out on the cluster (a reduce step is not required). Unlike existing distributed numerical frameworks, the proposed framework is elastic and works with both dense and sparse data representations. This generality is achieved through a generative communication scheme whose expressions are either consumed by the distributed computing environment or used to move data, in a peer-to-peer (P2P) fashion, between nodes in a cluster/cloud. This approach integrates advances from both the cloud computing and distributed numerical computing communities and can be applied to a general class of learning challenges.
      PDF
    • Anshumali Shrivastava and Ping Li.
      Beyond Pairwise: Provably Fast Algorithms for Approximate k-Way Similarity Search
      We go beyond the notion of pairwise similarity and look into search problems with k-way similarity functions. In this paper, we focus on problems related to 3-way Jaccard similarity. We show that approximate R3way similarity search problems admit fast algorithms with provable guarantees, analogous to the pairwise case. Our analysis and speedup guarantees naturally extend to k-way resemblance. In the process, we extend the traditional framework of locality sensitive hashing (LSH) to handle higher-order similarities, which could be of independent theoretical interest. The applicability of R3way search is shown on the Google Sets application as well as in an application for improving retrieval quality.
      PDF
    • Wei Dai, Jinliang Wei, Xun Zheng, Jin Kyu Kim, Seunghak Lee, Junming Yin, Qirong Ho and Eric Xing.
      Petuum: A System for Iterative-Convergent Distributed ML
      A major bottleneck to applying advanced ML programs at industrial scales is the migration of an academic implementation, often specialized for a small, well-controlled computing platform such as desktop PCs and small lab clusters, to a big, less predictable platform such as a corporate cluster or the cloud. This poses enormous challenges: how does one train huge models with billions of parameters on massive data, especially when substantial expertise is required to handle many low-level systems issues? We propose a new architecture of systems components that systematically addresses these challenges, thus providing a general-purpose distributed platform for Big Machine Learning. Our architecture specifically exploits the fact that many ML programs are fundamentally loss function minimization problems, and that their iterative-convergent nature presents many unique opportunities to minimize loss, such as via dynamic variable scheduling and error-bounded consistency models for synchronization. Thus, we treat data, parameter and variable blocks as computing units to be dynamically scheduled and updated in an error-bounded manner, with the goal of minimizing the loss function as quickly as possible.
      PDF
    • Haiqin Yang, Junjie Hu, Michael Lyu and Irwin King.
      Online Imbalanced Learning with Kernels
      Imbalanced learning, or learning from imbalanced data, is a challenging problem in both academia and industry. Nowadays, streaming imbalanced data are common and raise the volume, velocity, and variety issues of learning from such data. To tackle these issues, online learning algorithms have been proposed that learn a linear classifier by maximizing the AUC score. However, the developed linear classifiers ignore the learning power of kernels. In this paper, we therefore propose online imbalanced learning with kernels (OILK) to exploit the non-linearity and heterogeneity embedded in the imbalanced data. Different from previously proposed work, we optimize the AUC score to learn a non-linear representation via the kernel trick. To relieve the computational and storage cost, we also investigate different buffer update policies, including first-in-first-out (FIFO) and reservoir sampling (RS), to maintain a fixed budgeted buffer on the number of support vectors. We demonstrate the properties of our proposed OILK through detailed experiments.
      PDF
    • Alex Beutel, Abhimanu Kumar, Evangelos Papalexakis, Partha Pratim Talukdar, Christos Faloutsos and Eric Xing.
      FLEXIFACT: Scalable Flexible Factorization of Coupled Tensors on Hadoop
      Given multiple data sets of relational data that share a number of dimensions, how can we efficiently decompose our data into the latent factors? Factorization of a single matrix or tensor has attracted much attention, as, e.g., in the Netflix challenge, with users rating movies. However, we often have additional side information, e.g., demographic data about the users in the Netflix example above. Incorporating the additional information leads to the coupled factorization problem. So far, it has been solved for relatively small datasets. We provide a distributed, scalable method for decomposing matrices, tensors, and coupled data sets through stochastic gradient descent on a variety of objective functions. We offer the following contributions: (1) Versatility: Our algorithm can perform matrix, tensor, and coupled factorization, with flexible objective functions including the Frobenius norm, Frobenius norm with an l1-induced sparsity, and non-negative factorization. (2) Scalability: FLEXIFACT scales to unprecedented sizes in both the data and model, with up to billions of parameters. FLEXIFACT runs on standard Hadoop. (3) Convergence proofs showing that FLEXIFACT converges on the variety of objective functions, even with projections.
      PDF
    • Faraz Makari Manshadi and Rainer Gemulla.
      A Distributed Approximation Algorithm for Mixed Packing-Covering Linear Programs
      Mixed packing-covering linear programs capture a simple but expressive subclass of linear programs. They commonly arise as linear programming relaxations of a number of important combinatorial problems, including various network design and generalized matching problems. In this paper, we propose an efficient distributed approximation algorithm for solving mixed packing-covering problems which requires a poly-logarithmic number of passes over the input. Our algorithm is well-suited for parallel processing on GPUs, in shared-memory architectures, or on small clusters of commodity nodes. We report results of a case study for generalized bipartite matching problems.
      PDF
    • Artem Sokolov and Stefan Riezler.
      Task-driven Greedy Learning of Feature Hashing Functions
      Randomly hashing multiple features into one aggregated feature is routinely used in large-scale machine learning tasks to both increase speed and decrease memory requirements, with little or no sacrifice in performance. In this paper we investigate whether using a learned (instead of a random) hashing function improves performance. We show experimentally that with increasing difference between the dimensionalities of the input space and the hashed space, learning hashes is increasingly useful compared to random hashing.
      PDF (a sketch of the baseline hashing trick appears after this list)
    • Ahmed Elgohary, Ahmed Farahat, Mohamed Kamel and Fakhri Karray.
      Approximate Nearest Centroid Embedding for Kernel k-Means
      This paper proposes an efficient embedding method for scaling kernel k-means on cloud infrastructures. The embedding method allows for approximating the computation of the nearest centroid to each data instance and, accordingly, it eliminates the quadratic space and time complexities of the cluster assignment step in the kernel k-means algorithm. We show that the proposed embedding method is effective under memory and computing power constraints, and that it achieves better clustering performance compared to other approximations of the kernel k-means algorithm.
      PDF
    • Yisheng Liao, Alex Rubinsteyn, Russell Power and Jinyang Li.
      Learning Random Forests on the GPU
      Random Forests are a popular and powerful machine learning technique, with several fast multi-core CPU implementations. Since many other machine learning methods have seen impressive speedups from GPU implementations, applying GPU acceleration to random forests seems like a natural fit. Previous attempts to use GPUs have relied on coarse-grained task parallelism and have yielded inconclusive or unsatisfying results. We introduce CudaTree, a GPU Random Forest implementation which adaptively switches between data and task parallelism. We show that, for larger datasets, this algorithm is faster than highly tuned multi-core CPU implementations.
      PDF
    • Shravan Narayanamurthy, Markus Weimer, Dhruv Mahajan, Tyson Condie, Sundararajan Sellamanickam and S. Sathiya Keerthi.
      Towards Resource-Elastic Machine Learning

      PDF
    • Ignacio Arnaldo, Kalyan Veeramachaneni and Una-May O'Reilly.
      Building Multiclass Nonlinear Classifiers with GPUs
      The adoption of multiclass classification strategies that train independent binary classifiers becomes challenging when the goal is to retrieve nonlinear models from large datasets and the process requires several passes through the data. In such a scenario, the combined use of a search-and-score algorithm and GPUs makes it possible to obtain binary classifiers in reduced time. We demonstrate our approach by training a ten-class classifier over more than 400K exemplars following the exhaustive Error Correcting Output Code strategy that decomposes into 511 binary problems.
      PDF
    • John Canny and Huasha Zhao.
      BIDMach: Large-scale Learning with Zero Memory Allocation
      This paper describes recent work on the BIDMach toolkit for large-scale machine learning. BIDMach has demonstrated single-node performance that exceeds that of published cluster systems for many common machine-learning tasks. BIDMach makes full use of both CPU and GPU acceleration (through a sister library BIDMat), and requires only modest hardware (commodity GPUs). One of the challenges of reaching this level of performance is the allocation barrier. While it is simple and expedient to allocate and recycle matrix (or graph) objects in expressions, this approach is too slow to match the arithmetic throughput possible on either GPUs or CPUs. In this paper we describe a caching approach that allows code with complex matrix (graph) expressions to run at massive scale, i.e. multi-terabyte data, with zero memory allocation after initial start-up. We present a number of new benchmarks that leverage this approach.
      PDF
    • Shohei Hido, Satoshi Oda and Seiya Tokui.
      Jubatus: An Open Source Platform for Distributed Online Machine Learning
      Distributed computing is essential for handling very large datasets. Online learning is also promising for learning from rapid data streams. However, it is still an unresolved problem how to combine them for scalable learning and prediction on big data streams. We propose a general computational framework called loose model sharing for online and distributed machine learning. The key is to share only models rather than data between distributed servers. We also introduce Jubatus, an open source software platform based on the framework. Finally, we describe the details of implementing classifier and nearest neighbor algorithms, and discuss our experimental evaluations.
      PDF
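
As promised above, here is a minimal sketch of online bootstrapping. The classic online-bagging trick (Oza and Russell) weights each incoming example by an independent Poisson(1) draw per replicate, so nothing is resampled or stored; I am assuming the Vowpal Wabbit implementation of Qin et al. is in this spirit, but the paper has the actual details:

```python
import numpy as np

rng = np.random.default_rng(0)
B, d = 10, 5                           # bootstrap replicates, feature dimension
W = np.zeros((B, d))                   # one linear model per replicate

def online_bootstrap_update(x, y, lr=0.05):
    for b in range(B):
        k = rng.poisson(1.0)           # example occurs k times in replicate b
        if k:
            err = W[b] @ x - y
            W[b] -= lr * k * err * x   # weighted squared-loss SGD step

true_w = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
for _ in range(2000):                  # streaming examples, each seen once
    x = rng.normal(size=d)
    online_bootstrap_update(x, x @ true_w + 0.1 * rng.normal())

preds = W @ np.ones(d)                 # spread across replicates ~ uncertainty
print(preds.mean(), preds.std())
```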
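And the baseline that the Sokolov and Riezler poster starts from is the standard (random) hashing trick, sketched below; their contribution is to learn the hash function instead, which this snippet does not attempt:

```python
import numpy as np
from hashlib import md5

def hashed_vector(features, dim=2**10):
    """Map named features into a fixed dim-sized vector, aggregating collisions."""
    v = np.zeros(dim)
    for name, value in features.items():
        h = int(md5(name.encode()).hexdigest(), 16)
        sign = 1.0 if (h >> 1) % 2 == 0 else -1.0  # signed hashing debiases collisions
        v[h % dim] += sign * value
    return v

x = hashed_vector({"word=nips": 1.0, "word=2013": 1.0, "bias": 1.0})
```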

Accepted Papers

Linear Bandits, Matrix Completion, and Recommendation Systems [pdf]
Efficient coordinate-descent for orthogonal matrices through Givens rotations [pdf][supplementary]
Improved Greedy Algorithms for Sparse Approximation of a Matrix in terms of Another Matrix [pdf]
Preconditioned Krylov solvers for kernel regression [pdf]
Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms [pdf][supplementary]
Dimension Independent Matrix Square using MapReduce [pdf]

  • Active Learning of Intuitive Sound Qualities (Huang, Duvenaud, Arnold, Partridge, and Oberholtzer) [pdf]
    There is often a mismatch between the high-level goals an artist wants to express and what the parameters of a synthesizer allow them to control. To enable composers to directly adjust personalized high-level qualities during sound synthesis, our system actively learns functions that map from the space of synthesizer control parameters to perceived levels of high-level qualities.
  • Automatic Construction and Natural-Language Summarization of Additive Nonparametric Models (Lloyd, Duvenaud, Grosse, Tenenbaum, and Ghahramani) [pdf][supplement1][supplement2]
    To complement recently introduced automatic model-construction and search methods, we demonstrate an automatic model-summarization procedure. After building an additive nonparametric regression model, our method constructs a report which visualizes and explains in words the meaning and relevance of each component. These reports enable human model-checking and the understanding of complex modeling assumptions and structure. We demonstrate this procedure on two time-series, showing that the automatically constructed models identify clearly interpretable structures that can be automatically described in simple natural language.
  • Designing Constructive Machine Learning Models based on Generalized Linear Learning Techniques (Kordjamshidi and Moens) [pdf]
    We propose a general framework for designing machine learning models that deal with constructing complex structures in the output space. The goal is to provide an abstraction layer to easily represent and design constructive learning models. The learning approach is based on generalized linear training techniques, and exploits techniques from combinatorial optimization to deal with the complexity of the underlying inference required in this type of model. This approach also allows one to consider global structural characteristics and constraints over the output elements in an efficient training and prediction setting. The use case focuses on building spatial meaning representations from text to instantiate a virtual world.
  • Learning Graphical Concepts (Ellis, Dechter, Adams, and Tenenbaum) [pdf]
    How can machine learning techniques be used to solve problems whose solutions are best represented as computer programs? For example, suppose a researcher wants to design a probabilistic graphical model for a novel domain. Searching the space of probabilistic models automatically is notoriously difficult, especially when latent variables are involved. However, researchers seem able to easily adapt commonly used modeling motifs to new domains. In doing so, they draw on abstractions such as trees, chains, grids and plates to constrain and direct the kinds of models they produce. This suggests that before we ask machine learning algorithms to discover parsimonious models of new domains, we should develop techniques that enable our algorithms to automatically learn these "graphical concepts" in much the same way that researchers themselves do, by seeing examples in the literature. One natural way to think of these graphical concepts is as programs that take sets of random variables and produce graphical models that relate them. In this work, we describe the CEC algorithm, which attempts to learn a distribution over programs by incrementally finding program components that commonly help to solve problems in a given domain, and we show preliminary results indicating that CEC is able to discover the graphical concepts that underlie many of the common graphical model structures.
  • The Constructive Learning Problem: An Efficient Approach for Hypergraphs (Costa and Sorescu) [pdf]
    Discriminative systems that can deal with input graphs are well known; however, generative/constructive approaches that can output (hyper)graphs belonging with high probability to a desired class are less studied. Here we propose an approach that, differently from common graph grammar inference systems, is computationally efficient and robust to the presence of outliers in the training sample. We report experimental results on a de-novo molecular synthesis problem. We show that we can construct compounds that, once added to the original training set, can improve the performance of a binary classification predictor.
  • Analyzing Probabilistic Models Generated by EDAs for Simplified Protein Folding Problems (Santana, Mendiburu, and Lozano) [pdf]
    Estimation of distribution algorithms (EDAs) are optimization methods that construct at each step a probabilistic graphical model (PGM) of the best evaluated solutions. The model serves as a concise representation of the regularities shared by the good solutions and can serve to unveil structural characteristics of the problem domain. In this paper we use the PGMs learned by EDAs in the optimization of 15,575 instances of the hydrophobic-polar (HP) functional protein folding model to analyze the relationship between the information contained in the PGMs' structures and the quality of the EDA's solutions.
  • Anticipating the Future By Constructing Human Activities using Object Affordances (Koppula and Saxena) [pdf]
    An important aspect of human perception is anticipation, and anticipating which activities a human will do next (and how to do them) is useful for many applications; for example, anticipation enables an assistive robot to plan ahead for reactive responses in human environments. In this work, we present a constructive approach for generating various possible future human activities by reasoning about the rich spatial-temporal relations through object affordances. We represent each possible future using an anticipatory temporal conditional random field (ATCRF) where we sample the nodes and edges corresponding to future object trajectories and human poses from a generative model. We then represent the distribution over the potential futures using a set of constructed ATCRF particles. In extensive evaluation on the CAD-120 human activity RGB-D dataset, for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of the top three predictions actually happened) of 75.4%, 69.2% and 58.1% for an anticipation time of 1, 3 and 10 seconds respectively.
  • Learning Global-to-Local Discrete Components with Nonparametric Bayesian Feature Construction (Heo, Lee, and Zhang) [pdf]
    Finding common latent components from data is an important step in many data mining applications. These latent variables are typically categorical and there are many sources of categorical variables, including dichotomous, nominal, ordinal, and cardinal values. Thus it is important to be able to represent the discrete components (categories) in a flexible way. Here we propose a nonparametric Bayesian approach to learning "plastic" discrete components by considering the uncertainty of the number of components with the Indian buffet processes (IBP). As observation models, we use the product of experts (PoE) to utilize sharper representation power and sparse over-completeness. We apply the proposed method to optical hand-written digit datasets and demonstrate its capability of finding flexible global-to-local components that can be used to describe and generate the observed digit images faithfully.
  • Racing Tracks Improvisation (Wang and Missura) [pdf][supplement]
    Procedural content generation is a popular technique in game development. One of its typical applications is the generation of game levels. This paper presents a method to generate tracks for racing games by viewing racing track generation as a discrete sequence prediction problem. To solve it we combine two techniques from music improvisation. We show that this method is capable of generating new racing tracks which appear interesting enough. (A minimal sketch of the Markov-sampling idea shared by this poster and the next appears after this list.)
  • STONES: Stochastic Technique for Generating Songs (Kamp and Manea) [pdf]
    We propose a novel approach for automatically constructing new songs from a set of given compositions that involves sampling a melody line as well as the corresponding harmonies given by chords. The song is sampled from a hierarchical Markov model that captures the implicit properties of good composed songs from a set of existing ones. We empirically show that songs generated by our approach are closer to music composed by humans than those of existing methods.
  • Constructing Cocktails from a Cocktail Map (Paurat, Garnett, and Gärtner) [pdf]
    Consider a dataset that describes cocktails by the amount of ingredients used and a lower dimensional embedding of it that can be considered a map of cocktails. The problem we tackle is to query an arbitrary point of interest in this lower dimensional embedding and retrieve a newly constructed cocktail which embeds to that queried location. To do so, we formulate the task as a constrained optimization problem and consider the resulting ingredient mix as a 'hot' candidate. Starting off with a very basic formulation that merely demands the necessities of our problem to be fulfilled, we incorporate additional desired conditions into the problem formulation and compare the resulting cocktail recipes.
  • Supervised graph summarization for structuring academic search results (Mirylenka and Passerini) [pdf]
    In this paper we address the problem of visualizing the query results of academic search services. We suggest representing the search results as concise topic hierarchies, and propose a method of building such hierarchies through summarization of the intermediate large topic graphs. We describe a supervised learning technique for summarizing the topic graphs in the most informative way using sequential structured prediction, and discuss our ongoing work on the interactive acquisition of the training examples.
  • Hybrid SRL with Optimization Modulo Theories (Teso, Sebastiani, and Passerini) [pdf]
    Generally speaking, the goal of constructive learning can be seen as follows: given an example set of structured objects, generate novel objects with similar properties. From a statistical-relational learning (SRL) viewpoint, the task can be interpreted as a constraint satisfaction problem, i.e. the generated objects must obey a set of soft constraints, whose weights are estimated from the data. Traditional SRL approaches rely on (finite) First-Order Logic (FOL) as a description language, and on MAX-SAT solvers to perform inference. Alas, FOL is unsuited for constructive problems where the objects contain a mixture of Boolean and numerical variables. It is in fact difficult to implement, e.g., linear arithmetic constraints within the language of FOL. In this paper we propose a novel class of hybrid SRL methods that rely on Satisfiability Modulo Theories, an alternative class of formal languages that allow one to describe, and reason over, mixed Boolean-numerical objects and constraints. The resulting methods, which we call Learning Modulo Theories, are formulated within the structured output SVM framework, and employ a weighted SMT solver as an optimization oracle to perform efficient inference and discriminative max-margin weight learning. We also present a few examples of constructive learning applications enabled by our method.
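
As mentioned above, both the racing-track and the song-generation entries share a sequence-sampling core: fit Markov transition statistics on example sequences, then sample new ones. Both papers are richer than this (music-improvisation techniques in one, a hierarchical model over melody and chords in the other); the snippet below is only the baseline idea:

```python
import random
from collections import defaultdict

def fit_markov(sequences):
    """Count first-order transitions over example sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, start, length, rng=None):
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(length - 1):
        nxt = counts[out[-1]]
        if not nxt:
            break                      # dead end: no observed continuation
        symbols, weights = zip(*nxt.items())
        out.append(rng.choices(symbols, weights=weights)[0])
    return out

# Toy "tracks" as straight/left/right segments:
model = fit_markov([list("SLLRSLRRSL"), list("SLRSLLRSRR")])
print("".join(sample(model, "S", 12)))
```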
  1. Varun Aggarwal, Shashank Srikant, and Vinay Shashidhar
    Principles for using Machine Learning in the Assessment of Open Response Items: Programming Assessment as a Case Study
  2. Sumit Basu, Chuck Jacobs and Lucy Vanderwende
    Powergrading: a Clustering Approach to Amplify Human Effort for Short Answer Grading
  3. Franck Dernoncourt, Choung Do, Sherif Halawa, Una-May O’Reilly, Colin Taylor, Kalyan Veeramachaneni and Sherwin Wu
    MOOCVIZ: A Large Scale, Open Access, Collaborative, Data Analytics Platform for MOOCs
  4. Jorge Diez, Oscar Luaces, Amparo Alonso-Betanzos, Alicia Troncoso and Antonio Bahamonde
    Peer Assessment in MOOCs Using Preference Learning via Matrix Factorization
  5. Stephen E. Fancsali
    Data-driven causal modeling of “gaming the system” and off-task behavior in Cognitive Tutor Algebra
  6. Damien Follet
    A three-steps classification algorithm to assist criteria grid assessment
  7. Peter W. Foltz and Mark Rosenstein
    Tracking Student Learning in a State-Wide Implementation of Automated Writing Scoring
  8. Jose P. Gonzalez-Brenes, Yun Huang and Peter Brusilovsky
    FAST: Feature-Aware Student Knowledge Tracing
  9. Fang Han, Kalyan Veeramachaneni and Una-May O’Reilly
    Analyzing student behavior during problem solving in MOOCs
  10. Mohammad Khajah, Rowan M. Wing, Robert V. Lindsey and Michael C. Mozer
    Incorporating Latent Factors Into Knowledge Tracing To Predict Individual Differences In Learning
  11. Robert V. Lindsey, Jeff D. Shroyer, Harold Pashler and Michael C. Mozer
    Improving students’ long-term knowledge retention through personalized review
  12. Yun-En Liu, Travis Mandel, Zoran Popovic and Emma Brunskill
    Towards Automatic Experimentation of Educational Knowledge
  13. Andras Lorincz, Gyongyver Molnar, Laszlo A. Jeni, Zoltan Toser, Attila Rausch and Jeffrey F. Cohn
    Towards entertaining and efficient educational games
  14. Travis Mandel, Yun-En Liu, Zoran Popovic, Sergey Levin and Emma Brunskill
    Unbiased Offline Evaluation of Policy Representations for Educational Games
  15. Sergiy Nesterko, Svetlana Dotsenko, Qiuyi Hu, Daniel Seaton, Justin Reich, Isaac Chuang, and Andrew Ho
    Evaluating Geographic Data in MOOCs
  16. Andy Nguyen, Christopher Piech, Jonathan Huang and Leonidas Guibas
    Codewebs: Scalable Code Search for MOOCs
  17. Zachary A. Pardos
    Simulation study of a HMM based automatic resource recommendation system
  18. Arti Ramesh, Dan Goldwasser, Bert Huang, Snigdha Chaturvedi, Hal Daume III and Lise Getoor
    Modeling Learner Engagement in MOOCs using Probabilistic Soft Logic
  19. Nihar B. Shah, Joseph K. Bradley, Abhay Parekh, Martin Wainwright and Kannan Ramchandran
    A Case for Ordinal Peer Evaluation in MOOCs
  20. Adish Singla, Ilija Bogunovic, Gabor Bartok, Amin Karbasi and Andreas Krause
    On Actively Teaching the Crowd to Classify
  21. Glenda S. Stump, Jennifer DeBoer, Jonathan Whittinghill and Lori Breslow
    Development of a Framework to Classify MOOC Discussion Forum Posts: Methodology and Challenges
  22. Weiyi Sun, Siwei Lyu, Hui Jin and Jianwei Zhang
    Analyzing Online Learning Discourse using Probabilistic Topic Models
  23. Joseph Jay Williams
    Applying Cognitive Science to Online Learning
  24. Joseph Jay Williams and Betsy Williams
    Using Interventions to Improve Online Learning
  25. Diyi Yang, Tanmay Sinha, David Adamson and Carolyn Penstein Rose
    “Turn on, Tune in, Drop out”: Anticipating student dropouts in Massive Open Online Courses

Poster Session I

Yuxin Chen, Hiroaki Shioi, Cesar Antonio Fuentes Montesinos, Lian Pin Koh, Serge Wich, Andreas Krause.
Active Detection for Biodiversity Monitoring via Adaptive Submodularity.

Christopher R. Dance, Stephane Clinchant, Onno R. Zoeter.
Approximate Inference for a Non-Homogeneous Poisson Model of On-Street Parking. [pdf]

George Mathews, John Vial, Sanjeev Jha, Gregoire Mariethoz, Nickens Okello, Suhinthan Maheswararajah, Dom De Re, Michael Smith.
Bayesian Inference of the Hydraulic Properties of Deep Geological Formations.

Simon O’Callaghan, Alistair Reid, Lachlan McCalman, Edwin V. Bonilla, Fabio Ramos
Bayesian Joint Inversions for the Exploration and Characterization of Geothermal Targets. [pdf]

Jun Yu, Weng-Keen Wong, Steve Kelling.
Clustering Species Accumulation Curves to Identify Groups of Citizen Scientists with Similar Skill Levels. [pdf]

Kalyan Veeramachaneni, Teasha Feldman-Fitzthum, Una-May O’Reilly, Alfredo Cuesta-Infante.
Copula-Based Wind Resource Assessment. [pdf]

Danny Panknin, Tammo Krueger, Mikio Braun, Klaus-Robert Muller, Siegmund Duell.
Detecting changes in Wind Turbine Sensory Data. [pdf]

Shan Xue, Alan Fern, Daniel Sheldon.
Dynamic Resource Allocation for Optimizing Population Diffusion.

Nidhi Singh.
Green-Aware Workload Prediction for Non-stationary Environments.

Mingjun Zhong, Nigel Goddard, Charles Sutton.
Interleaved Factorial Non-Homogeneous Hidden Markov Models for Energy Disaggregation. [pdf]


Poster Session II

Tao Sun, Daniel Sheldon, Akshat Kumar.
Message Passing for Collective Graphical Models. [pdf]

Jun Yu, Rebecca A. Hutchinson, Weng-Keen Wong.
Modeling Misidentification of Bird Species by Citizen Scientists. [pdf]

Anna Ogawa, Akiko Takeda, Toru Namerikawa.
Photovoltaic Output Prediction Using Auto-regression with Support Vector Machine. [pdf]

Rebecca A. Hutchinson, Thomas G. Dietterich.
Posterior Regularization for Occupancy Models.

Xiaojian Wu, Daniel Sheldon, Shlomo Zilberstein.
Stochastic Network Design for River Networks. [pdf]

Daniel Urieli, Peter Stone.
TacTex’13 - An Adaptive Champion Power Trading Agent.

Bingsheng Wang, Haili Dong, Chang-Tien Lu.
Using Step Variant Convolutional Neural Networks for Energy Disaggregation. [pdf]

Angela Fernandez, Carlos M. Alaiz, Ana M. Gonzalez, Julia Diaz, Jose R. Dorronsoro
Local Anisotropic Diffusion Detection of Wind Ramps. [pdf]

Mahsa Ghafrianzadeh, Claire Monteleoni.
Climate Prediction via Matrix Completion. [pdf]


AFTERNOON SESSION (3:30-6:30)


Domain Adaptation as Learning with Auxiliary Information
Shai Ben-David, Ruth Urner

Sample Complexity of Sequential Multi-task Reinforcement Learning
Emma Brunskill, Lihong Li

Sequential Transfer in Multi-armed Bandit with Logarithmic Transfer Regret
Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill

Class-wise Density-ratios for Covariate Shift
Yun-Qian Miao, Ahmed K. Farahat, Mohamed S. Kamel

Domain adaptation for sequence labeling using hidden Markov models
Edouard Grave, Guillaume Obozinski, Francis Bach

Retrieval of Experiments: Sequential Dirichlet Process Mixtures in Model Space
Ritabrata Dutta, Sohan Seth, Samuel Kaski

Multitask Learning with Feature Selection for Groups of Related Tasks
Meenakshi Mishra, Jun Huan

Restricted Transfer Learning for Text Categorization
Rajhans Samdani, Gideon Mann

Transform-based Domain Adaptation for Big Data
Erik Rodner, Judy Hoffman, Trevor Darrell, Jeff Donahue, Kate Saenko

A PAC-Bayesian bound for Lifelong Learning
Anastasia Pentina, Christoph H. Lampert

Multi-task Bilinear Classifiers for Visual Domain Adaptation
Jiaolong Xu, Sebastian Ramos, Xu Hu, David Vazquez, Antonio M. Lopez

Tree-Based Ensemble Multi-Task Learning Method for Classification and Regression
Jaak Simm, Ildefons Magrans de Abril, Masashi Sugiyama

Domain Adaptation of Majority Votes via Perturbed Variation-based Label Transfer
Emilie Morvant

Multilinear Spectral Regularization for Kernel-based Multitask Learning
Marco Signoretto, Johan A.K. Suykens

Reinforcement Learning with Multi-Fidelity Simulators

Anytime Belief Propagation using Sparse Domains
Sameer Singh, Sebastian Riedel, Andrew McCallum


W00085459.jpg was taken on December 02, 2013 and received on Earth December 04, 2013. The camera was pointing toward SATURN at approximately 710,353 miles (1,143,202 kilometers) away, and the image was taken using the MT2 and CL2 filters. This image has not been validated or calibrated. 

Image Credit: NASA/JPL/Space Science Institute



Wednesday, December 04, 2013

PC-SBL: Pattern-Coupled Sparse Bayesian Learning for Recovery of Block-Sparse Signals - implementation -


I sure would like to see the complete sharp phase transition of this solver: Pattern-Coupled Sparse Bayesian Learning for Recovery of Block-Sparse Signals by Jun Fang, Yanning Shen, Hongbin Li, Pu Wang
We consider the problem of recovering block-sparse signals whose structures are unknown a priori. Block-sparse signals with nonzero coefficients occurring in clusters arise naturally in many practical scenarios. However, knowledge of the block structure is usually unavailable in practice. In this paper, we develop a new sparse Bayesian learning method for recovery of block-sparse signals with unknown cluster patterns. Specifically, a pattern-coupled hierarchical Gaussian prior model is introduced to characterize the statistical dependencies among coefficients, in which a set of hyperparameters is employed to control the sparsity of signal coefficients. Unlike the conventional sparse Bayesian learning framework in which each individual hyperparameter is associated independently with each coefficient, in this paper, the prior for each coefficient not only involves its own hyperparameter, but also the hyperparameters of its immediate neighbors. In this way, the sparsity patterns of neighboring coefficients are related to each other and the hierarchical model has the potential to encourage structured-sparse solutions. The hyperparameters, along with the sparse signal, are learned by maximizing their posterior probability via an expectation-maximization (EM) algorithm. Numerical results show that the proposed algorithm consistently outperforms other existing methods in a series of experiments.
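
While waiting for that phase transition, here is how I read the coupling mechanism, as a schematic numpy sketch. The coupling constant beta and the simplified updates below are mine, not the authors' exact EM derivation; their implementation is the reference:

```python
import numpy as np

def pc_sbl_sketch(A, y, beta=0.5, noise_var=1e-2, iters=50):
    """Schematic pattern-coupled SBL: each coefficient's precision is tied to
    its neighbors', so the recovered support tends to form blocks."""
    m, n = A.shape
    alpha = np.ones(n)
    for _ in range(iters):
        # Effective prior precision mixes each alpha with its neighbors'.
        d = alpha + beta * (np.roll(alpha, 1) + np.roll(alpha, -1))
        Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(d))
        mu = Sigma @ (A.T @ y) / noise_var
        e2 = mu**2 + np.diag(Sigma)          # posterior second moments
        # Re-estimation also looks at neighbors, keeping a block's alphas small.
        alpha = 1.0 / (e2 + beta * (np.roll(e2, 1) + np.roll(e2, -1)) + 1e-12)
    return mu

rng = np.random.default_rng(0)
x = np.zeros(100); x[40:50] = rng.normal(size=10)    # one block of nonzeros
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x_hat = pc_sbl_sketch(A, A @ x + 0.01 * rng.normal(size=50))
```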

How a Recovery Mission became a Rescue Operation

I wonder if there is a simple sensor that can detect air bubbles in shipwrecks, and how robotic recovery missions handle the SLAM problem. A while back, I had a discussion with somebody who was doing recovery in the Seine River and he told me that with the right sensor, most of the blur disappeared. Anyway, watch out for what happens starting at 5 minutes and 30 seconds in this Body Recovery mission. Simply awesome!


 



Tuesday, December 03, 2013

PSIM: France's Innovation 2030 Program

The French have decided to launch a worldwide competition with prize money in the vicinity of 406 million dollars (300 M Euros). If you are interested, you may want to read more about it here:


I was initially curious about the program as it had been described in the press as a DARPA-like Grand Challenge. It's really nothing of the sort. From reading the various documents, here is what I understood. The program is open to any French company (including start-ups) or foreign companies willing to have some activity in France. In fact, unlike the website, everything is clearly spelled out here in this equivalent of the Federal Register:


The overall theme of the program is to

...faire émerger ou renforcer des leaders industriels français sur des marchés considérés comme stratégiques pour les dix prochaines années.


help French industrial leaders emerge or grow stronger in markets considered strategic for the next ten years

In short, this is a three-phase program not unlike the SBA's SBIR and STTR programs housed in every department (DOE, DoD, NASA, etc.), at least in the beginning. In the PSIM program, however, the themes are not derived from specific needs identified within the R&D organizations of each department, as one would expect. Rather, the seven themes were identified by a commission of very important people, much like what one would expect from a National Academy of Sciences panel. Those themes (also called 'Ambitions') are:

  • Ambition n°1 : Innovation projects for intermittent or continuous energy storage / Projets d’innovation en matière de stockage d’énergie intermittente ou non 
  • Ambition n°2 : Projects to enable the viable and efficient recycling of rare metals / Projets permettant de rendre viable et efficace le recyclage des métaux rares
  • Ambition n°3 : Projects promoting the development of submarine metallic minerals and projects promoting cheaper and/or lower-energy desalination of seawater / Projets de valorisation des minerais métalliques sous-marins et projets favorisant des solutions de dessalement moins onéreuses et/ou plus faiblement consommatrices d’énergie de l’eau de mer. 
  • Ambition n°4 : Projects to develop food products based on vegetable proteins and plant-chemistry projects to develop new materials / Projets de développement de produits alimentaires à base de protéines végétales et projets de chimie du végétal visant à développer de nouveaux matériaux. 
  • Ambition n°5 : Projects promoting individualized targeted therapeutic interventions through, for instance, the development of genomics tools, medical devices and / or high-resolution imaging / Projets favorisant le ciblage individualisé des interventions thérapeutiques s’appuyant par exemple sur la génomique, les dispositifs médicaux et/ou l’imagerie à haute résolution 
  • Ambition n°6 : Projects meeting the loss of autonomy by seniors related to robotics and domo-medicine. / Projets répondant à la perte d’autonomie des seniors, liés à la robotique et la domo-médecine. 
  • Ambition n°7 : Projects to better exploit Big Data and define new uses, analysis and valuation of these data. / Projets permettant de mieux exploiter les données et de définir de nouveaux usages, modèles d’analyse et de valorisation de celles-ci. 

The US SBIR program is generally structured in the following manner:

The SBIR program agencies award monetary grants in phases I and II of a three-phase program:[6]
  • Phase I, the startup phase, makes awards of "up to $150,000 for approximately 6 months support [for] exploration of the technical merit or feasibility of an idea or technology."
  • Phase II awards grants of "up to $1 million, for as many as 2 years," in order to facilitate expansion of Phase I results. Research and development work is performed and the developer evaluates the potential for commercialization. Phase II grants are awarded exclusively to Phase I award winners.
  • Phase III is intended to be the time when innovation moves from the laboratory into the marketplace. No additional SBIR funds are awarded for Phase III. "The small business must find funding in the private sector or other non-SBIR federal agency funding."


In the French PSIM program, the names change but here is what we have:

  • Phase 1, called 'Amorçage' (seeding), is about 6 months long, more or less (there are three intake periods during which the commission fills its portfolio with 100 projects; the commission stops taking on new projects after the 100th). Phase 1 projects cannot be above 200 K Euros. No cost-sharing is required. 
  • Then, in September 2014, there is a re-compete to get into Phase 2, called 'Levée de risque' (de-risking), open to companies that were supported with Phase 1 funds and to new entrants who did not need that money to evaluate the technical feasibility of their products. From these 100 companies plus newcomers, the commission downselects about 5 to 7 per 'Ambition' theme. Some cost-sharing is required in Phase 2 for selected companies. Maximum funding for this phase is 2M Euros, and it also seems to last about 2 years. 
  • Phase 3, called 'Développement', sees a downselect from the 40 or so companies that were in Phase 2 down to 10 companies. Again, cost-sharing is required, and the amount the State is willing to part with can hover in the 40M Euros range for each of the Phase 3 projects.


If you want to apply, it's all here. The first intake deadline for Phase 1 is January 31, 2014. If you want additional details that you cannot get from the site, I am not sure there is a way to ask questions, as is usually the case in the US. I can help with some translation if you are interested. Good luck.


Instance Optimality Property Extended: Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems

Anthony Bourrier just sent me the following:

Hello Igor,
I submitted recently, along with four coauthors, a paper to IEEE Trans-IT about fundamental performance limits of decoders in linear inverse problems in a much more general setting than the usual sparse vectors case. In particular, among other things, it extends relationships between Instance Optimality, Null Space Property and Restricted Isometry Property for any model of vectors, in noiseless and noisy settings.
It is available on arXiv: http://arxiv.org/abs/1311.6239
Is it possible to feature it on your blog Nuit Blanche?
Thanks in advance and best regards,
I sure can feature it. But first, a small word to put this paper in context. If you recall this Sunday Morning Insight entry on A Quick Panorama of Sensing from Direct Imaging to Machine Learning, the continuum of approaches used in imaging, from direct imaging to neural network based classification tasks, could only come from the common vision that what you measure is what you get. From the very beginning in compressive sensing, people have modeled this "x=x" with what is called Instance Optimality Decoding. The following paper goes beyond the traditional CS framework and tries to address this same Instance Optimality Decoding in a larger context that includes data on manifolds as well as classification (A is not I, for instance). Without further ado, here it is: Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems by Anthony Bourrier, Mike E. Davies, Tomer Peleg, Patrick Pérez, Rémi Gribonval

This paper focuses on characterizing the fundamental performance limits that can be expected from an ideal decoder given a general model, ie, a general subset of "simple" vectors of interest. First, we extend the so-called notion of instance optimality of a decoder to settings where one only wishes to reconstruct some part of the original high dimensional vector from a low-dimensional observation. This covers practical settings such as medical imaging of a region of interest, or audio source separation when one is only interested in estimating the contribution of a specific instrument to a musical recording. We define instance optimality relatively to a model much beyond the traditional framework of sparse recovery, and characterize the existence of an instance optimal decoder in terms of joint properties of the model and the considered linear operator. Noiseless and noise-robust settings are both considered. We show somewhat surprisingly that the existence of noise-aware instance optimal decoders for all noise levels implies the existence of a noise-blind decoder. A consequence of our results is that for models that are rich enough to contain an orthonormal basis, the existence of an L2/L2 instance optimal decoder is only possible when the linear operator is not substantially dimension-reducing. This covers well-known cases (sparse vectors, low-rank matrices) as well as a number of seemingly new situations (structured sparsity and sparse inverse covariance matrices for instance). We exhibit an operator-dependent norm which, under a model-specific generalization of the Restricted Isometry Property (RIP), always yields a feasible instance optimality and implies instance optimality with certain familiar atomic norms such as the L1 norm.
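
For readers new to the notion, noise-aware instance optimality in its usual form reads as follows; this is the standard definition from the compressive sensing literature, which the paper generalizes to arbitrary model sets and to recovering only part of x:

```latex
% A decoder $\Delta$ is instance optimal for the model set $\Sigma$ and
% measurement operator $A$ with constants $C, C'$ if, for every signal $x$
% and every noise vector $e$,
\[
\|\Delta(Ax + e) - x\| \;\le\; C\, d(x,\Sigma) \;+\; C' \|e\|,
\qquad d(x,\Sigma) := \inf_{z \in \Sigma} \|x - z\|.
\]
% The noiseless notion drops the $C'\|e\|$ term; the paper characterizes when
% such a $\Delta$ can exist as a joint property of $\Sigma$ and $A$, and
% relates it to a model-specific generalization of the RIP.
```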


Image Credit: NASA/JPL/Space Science Institute
N00218248.jpg was taken on November 30, 2013 and received on Earth December 02, 2013. The camera was pointing toward TITAN at approximately 42,845 miles (68,952 kilometers) away, and the image was taken using the CL1 and CB3 filters.





Monday, December 02, 2013

First-Photon Imaging - implementation -

Thanks to Evelyn Mitchell for spotting this one out:



First-Photon Imaging by Ahmed Kirmani, Dheera Venkatraman , Dongeek Shin, Andrea Colaço, Franco N. C. Wong, Jeffrey H. Shapiro, Vivek K Goyal
Imagers that use their own illumination can capture 3D structure and reflectivity information. With photon-counting detectors, images can be acquired at extremely low photon fluxes. To suppress the Poisson noise inherent in low-flux operation, such imagers typically require hundreds of detected photons per pixel for accurate range and reflectivity determination. We introduce a low-flux imaging technique, called first-photon imaging, which is a computational imager that exploits spatial correlations found in real-world scenes and the physics of low-flux measurements. Our technique recovers 3D structure and reflectivity from the first detected photon at each pixel. We demonstrate simultaneous acquisition of sub-pulse duration range and 4-bit reflectivity information in the presence of high background noise. First-photon imaging may be of considerable value to both microscopy and remote sensing.
The code + data is here. There is also the First-Photon Imaging project page. One should note that some of the authors now work for 3dimtech, a company they just started:


It is one startup we will follow (if they have an RSS feed, that is). Let us note that this effort is a typical instantiation of a low-bit concept and would fit well with the QIS architecture of Eric Fossum.
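
To get a feel for why one photon per pixel can be enough, here is a toy simulation of the reflectivity half of the idea. Depth estimation and background rejection, which the paper also handles, are left out, and the median filter is only my stand-in for the paper's spatial regularization:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
refl = np.full((64, 64), 0.1); refl[16:48, 16:48] = 0.9    # toy scene
p_detect = 0.05 * refl                 # detection probability per pulse

# Pulses fired until the first photon arrives: geometric in p_detect,
# so 1/count is already a per-pixel reflectivity estimate (up to scale).
n_pulses = rng.geometric(p_detect)
raw = 1.0 / n_pulses
smoothed = median_filter(raw, size=5)  # spatial correlations clean up the noise
```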

Previously, we had:


High Photon Efficiency Computational Range Imaging using Spatio-Temporal Statistical Regularization by Ahmed Kirmani, Dheera Venkatraman, Andrea Colaco, Franco N. C. Wong, Vivek K Goyal

We demonstrate 1 photon-per-pixel photon efficiency and sub-pulse-width range resolution in megapixel laser range imaging by using a joint spatio-temporal statistical processing framework and by exploiting transform-domain sparsity.

Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor by Kirmani, Ahmed; Colaço, Andrea; Wong, Franco N. C.; Goyal, Vivek K.




Sunday, December 01, 2013

Nuit Blanche In Review (November 2013)

This past month, we had quite a few interesting ideas condensing into concrete developments:
  • Sharp phase transitions are now part of our landscape; they can be used for many different purposes (see Sharp Phase Transition links)
  • Multiple hardware instruments and sensors are coming about that use sparsity-seeking solvers to reconstruct images or videos. They are, in effect, compressive sensors, but you won't find the name compressed sensing near the description of these instruments (see Hardware links)

  • We are also beginning to see results that will allow people to consider nonlinear sensing as an acceptable sensing modality. The big unknown for the time being is what types of nonlinearities are acceptable (see Connection to neural networks)

What I have not seen on a regular basis is hardware makers checking their instruments' capabilities through the computation of sharp phase transitions. It is important not just on a theoretical basis but also for them to discover the real capabilities of their systems (see Hardware). I note that this month, we may have witnessed another Donoho-Tao moment, with applied mathematicians, much like pirates, suggesting a new hardware architecture. You don't see this every day (see here).


And then there is the interesting case of well known Angel Investors who seem to put a good emphasis on sensors (The Business Side of Sensors, Part Deux).

We live in exciting times. We also had a bunch of interesting interactions this month, and eventually A Favor to Ask. Most importantly, we had a few implementations made available by their authors:
Hardware
Connection to neural networks
Tutorials
Others
Meetups

Sunday Morning Insight
Video
Saturday Morning Videos
