Here are a few uses of the Random Features approximation:
Dual Control for Approximate Bayesian Reinforcement Learning by
Edgar D. Klenske,
Philipp Hennig
Control of non-episodic, finite-horizon dynamical systems with uncertain
dynamics poses a challenging and fundamental instance of the
exploration-exploitation trade-off. Bayesian reinforcement learning, reasoning about the effect of
actions and future observations, offers a principled solution, but is
intractable. We review, then extend an old approximate approach from control
theory---where the problem is known as dual control---in the context of modern
regression methods, specifically generalized linear regression. Experiments on
simulated systems show that this framework offers a useful approximation to the
intractable aspects of Bayesian RL, producing structured exploration strategies
that differ from standard RL approaches. We provide simple examples for the use
of this framework in (approximate) Gaussian process regression and feedforward
neural networks for the control of exploration.
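The "(approximate) Gaussian process regression" mentioned above connects to the random-features theme of this post. As a hedged toy sketch (my own setup, not the paper's implementation): random Fourier features for an RBF kernel turn GP regression into Bayesian linear regression in a finite feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier Features for an RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2)).
# Hypothetical toy setup: 1-D inputs, D random features, bandwidth l.
D, l, noise = 200, 0.5, 0.1
W = rng.normal(0.0, 1.0 / l, size=(1, D))   # spectral frequencies
b = rng.uniform(0.0, 2 * np.pi, size=D)     # random phases

def phi(x):
    """Map inputs of shape (n, 1) to random features of shape (n, D)."""
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

# Toy training data.
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + noise * rng.normal(size=40)

# Bayesian linear regression in feature space, with a unit Gaussian prior on
# the weights: this is the finite-dimensional surrogate for GP regression.
Phi = phi(X)
A = Phi.T @ Phi / noise**2 + np.eye(D)          # posterior precision
mean_w = np.linalg.solve(A, Phi.T @ y) / noise**2

# Predictive mean at test points.
X_test = np.linspace(-3, 3, 100)[:, None]
y_pred = phi(X_test) @ mean_w
```

Because the model is linear in the random features, the posterior stays Gaussian and cheap to update, which is what makes it attractive inside a dual-control loop.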
Scatter Component Analysis: A Unified Framework for Domain Adaptation and Domain Generalization by
Muhammad Ghifary,
David Balduzzi,
W. Bastiaan Kleijn,
Mengjie Zhang
This paper addresses classification tasks on a particular target domain in
which labeled training data are only available from source domains different
from (but related to) the target. Two closely related frameworks, domain
adaptation and domain generalization, are concerned with such tasks, where the
only difference between those frameworks is the availability of the unlabeled
target data: domain adaptation can leverage unlabeled target information, while
domain generalization cannot.
We propose Scatter Component Analysis (SCA), a fast representation learning
algorithm that can be applied to both domain adaptation and domain
generalization. SCA is based on a simple geometrical measure, i.e., scatter,
which operates on a reproducing kernel Hilbert space. SCA finds a
representation that trades off maximizing the separability of classes,
minimizing the mismatch between domains, and maximizing the separability of
the data, each of which is quantified through scatter. The optimization
problem of SCA can be
reduced to a generalized eigenvalue problem, which results in a fast and exact
solution.
Comprehensive experiments on benchmark cross-domain object recognition
datasets verify that SCA performs much faster than several state-of-the-art
algorithms and also provides state-of-the-art classification accuracy in both
domain adaptation and domain generalization. We also show that scatter can be
used to establish a theoretical generalization bound in the case of domain
adaptation.
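The reduction to a generalized eigenvalue problem is the classical trick behind scatter-based objectives. A hedged illustration (LDA-style between/within scatters on toy data, not SCA's actual objective): a trace ratio of two scatter matrices is maximized by the leading generalized eigenvectors.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy data: 60 points in 5-D, three classes of 20 points each.
X = rng.normal(size=(60, 5))
y = np.repeat([0, 1, 2], 20)

mu = X.mean(axis=0)
B = np.zeros((5, 5))  # between-class scatter (to maximize)
W = np.zeros((5, 5))  # within-class scatter (to minimize)
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    B += len(Xc) * np.outer(mc - mu, mc - mu)
    W += (Xc - mc).T @ (Xc - mc)

# Solve B v = lambda (W + eps I) v; eigh returns eigenvalues in
# ascending order, so the last columns are the leading directions.
eps = 1e-6
vals, vecs = eigh(B, W + eps * np.eye(5))
projection = vecs[:, -2:]   # two leading generalized eigenvectors
Z = X @ projection          # learned low-dimensional representation
```

The closed-form eigensolve is what makes such methods fast and exact compared with iterative representation learners.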
Compact explicit feature maps provide a practical framework to scale kernel methods to large-scale learning, but deriving such maps for many types of kernels remains a challenging open problem. Among the commonly used kernels for non-linear classification are polynomial kernels, for which low approximation error has thus far necessitated explicit feature maps of large dimensionality, especially for higher-order polynomials. Meanwhile, because polynomial kernels are unbounded, they are frequently applied to data that has been normalized to unit ℓ2 norm. The question we address in this work is: if we know a priori that data is so normalized, can we devise a more compact map? We show that a putative affirmative answer to this question based on Random Fourier Features is impossible in this setting, and introduce a new approximation paradigm, Spherical Random Fourier (SRF) features, which circumvents these issues and delivers a compact approximation to polynomial kernels for data on the unit sphere. Compared to prior work, SRF features are less rank-deficient, more compact, and achieve better kernel approximation, especially for higher-order polynomials. The resulting predictions have lower variance and typically yield better classification accuracy.
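Why does unit normalization help? For unit-norm vectors the inner product is determined by the Euclidean distance, so a polynomial kernel becomes shift-invariant on the sphere, which is the regime where Fourier-feature approximations apply. A small numerical check of that identity (this demo is my own, not the SRF construction itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Normalize a vector to unit l2 norm."""
    return v / np.linalg.norm(v)

p = 3  # polynomial degree
x, y = unit(rng.normal(size=8)), unit(rng.normal(size=8))

# For unit-norm x, y: ||x - y||^2 = 2 - 2 <x, y>, hence
# <x, y> = 1 - ||x - y||^2 / 2, and the polynomial kernel
# (<x, y>)^p is a function of the distance ||x - y|| alone.
poly = np.dot(x, y) ** p
via_dist = (1.0 - np.linalg.norm(x - y) ** 2 / 2.0) ** p

assert np.isclose(poly, via_dist)
```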
Image Credit: NASA/JPL-Caltech/Space Science Institute
NASA's Cassini spacecraft zoomed by Saturn's icy moon Enceladus on Oct. 14, 2015, capturing this stunning image of the moon's north pole. A companion view from the wide-angle camera (PIA20010) shows a zoomed-out view of the same region for context. Scientists expected the north polar region of Enceladus to be heavily
cratered, based on low-resolution images from the Voyager mission, but
high-resolution Cassini images show a landscape of stark contrasts. Thin
cracks cross over the pole -- the northernmost extent of a global
system of such fractures. Before this Cassini flyby, scientists did not
know if the fractures extended so far north on Enceladus. North on Enceladus is up. The image was taken in visible green light with the Cassini spacecraft narrow-angle camera.
The view was acquired at a distance of approximately 4,000 miles
(6,000 kilometers) from Enceladus and at a Sun-Enceladus-spacecraft, or
phase, angle of 9 degrees. Image scale is 115 feet (35 meters) per
pixel.