Computations once thought impossible are now within reach thanks to random projections: compressive sensing to encode the wavefunction in quantum mechanics, and random features for high-dimensional kernel evaluations. Woohoo!
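As a toy illustration of the random-features half of that claim (Rahimi and Recht's random Fourier features; my own example, not taken from either paper below), a Gaussian kernel evaluation can be replaced by an inner product of randomly projected features:

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, rng):
    # Illustrative random-features sketch (Rahimi-Recht), not from the papers below.
    # Frequencies are drawn from the spectral density of the Gaussian kernel
    # k(x, y) = exp(-gamma * ||x - y||^2), which is N(0, 2*gamma*I).
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))                       # 5 points in 10 dimensions
Z = random_fourier_features(X, 4000, 0.5, rng)
K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1))
print(np.abs(Z @ Z.T - K_exact).max())             # small; shrinks as n_features grows
```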
Compact wavefunctions from compressed imaginary time evolution by Jarrod R. McClean, Alán Aspuru-Guzik
Simulation of quantum systems promises to deliver physical and chemical predictions for the frontiers of technology. Unfortunately, the exact representation of these systems is plagued by the exponential growth of dimension with the number of particles, or colloquially, the curse of dimensionality. The success of approximation methods has hinged on the relative simplicity of physical systems with respect to the exponentially complex worst case. Exploiting this relative simplicity has required detailed knowledge of the physical system under study. In this work, we introduce a general and efficient black-box method for many-body quantum systems that utilizes technology from compressed sensing to find the most compact wavefunction possible without detailed knowledge of the system: a Multicomponent Adaptive Greedy Iterative Compression (MAGIC) scheme. No knowledge of the structure of the problem is assumed other than correct particle statistics. This method can be applied to many quantum systems such as spins, qubits, oscillators, or electronic systems. As an application, we use this technique to compute ground state electronic wavefunctions of hydrogen fluoride and recover 98% of the basis set correlation energy, or equivalently 99.996% of the total energy, with $50$ configurations out of a possible $10^7$. Building from this compactness, we introduce the idea of nuclear union configuration interaction for improving the description of reaction coordinates and use it to study the dissociation of hydrogen fluoride and the helium dimer.
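The abstract does not spell out the MAGIC scheme itself, but the compressed-sensing ingredient it builds on, recovering the sparsest coefficient vector consistent with linear measurements, can be sketched with iterative hard thresholding. The measurement matrix A and sparsity level s below are purely illustrative:

```python
import numpy as np

def iterative_hard_thresholding(A, y, s, iters=300):
    # Generic sparse-recovery sketch -- NOT the authors' MAGIC algorithm.
    # Alternates a gradient step on ||A x - y||^2 with hard thresholding
    # that keeps only the s largest entries of x.
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)
        small = np.argsort(np.abs(x))[:-s]         # all but the s largest entries
        x[small] = 0.0
    return x

rng = np.random.default_rng(1)
n, m, s = 400, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)
x_hat = iterative_hard_thresholding(A, A @ x_true, s)
print(np.linalg.norm(x_hat - x_true))              # near zero when recovery succeeds
```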
ASKIT: Approximate Skeletonization Kernel-Independent Treecode in High Dimensions by William B. March, Bo Xiao, George Biros
We present a fast algorithm for kernel summation problems in high dimensions. These problems appear in computational physics, numerical approximation, non-parametric statistics, and machine learning. In our context, the sums depend on a kernel function that is a pair potential defined on a dataset of points in a high-dimensional Euclidean space. A direct evaluation of the sum scales quadratically with the number of points. Fast kernel summation methods can reduce this cost to linear complexity, but the constants involved do not scale well with the dimensionality of the dataset.
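For reference, the quadratic-cost direct evaluation the abstract starts from, u_i = sum_j K(x_i, x_j) w_j for a Gaussian kernel, is just a dense matrix-vector product (a naive baseline of my own, not ASKIT):

```python
import numpy as np

def direct_gaussian_sum(X, w, h):
    # Naive O(N^2) evaluation of u_i = sum_j exp(-||x_i - x_j||^2 / (2 h^2)) w_j.
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    return np.exp(-np.maximum(D2, 0.0) / (2.0 * h**2)) @ w

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 64))        # N = 2000 points in 64 dimensions
w = rng.normal(size=2000)
u = direct_gaussian_sum(X, w, h=8.0)   # both time and memory scale as N^2
```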
The main algorithmic components of fast kernel summation algorithms are the separation of the kernel sum into near-field and far-field contributions (which is the basis for pruning) and the efficient and accurate approximation of the far field.
We introduce novel methods for pruning and approximating the far field. Our far field approximation requires only kernel evaluations and does not use analytic expansions. Pruning is not done using bounding boxes but rather combinatorially using a sparsified nearest-neighbor graph of the input. The time complexity of our algorithm depends linearly on the ambient dimension. The error in the algorithm depends on the low-rank approximability of the far field, which in turn depends on the kernel function and on the intrinsic dimensionality of the distribution of the points. The error of the far field approximation does not depend on the ambient dimension.
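A rough sketch of the evaluation-only far-field idea: for two well-separated point sets the interaction block is numerically low rank, and a factorization can be built from kernel evaluations at a few sampled "skeleton" points. The Nyström-style cross approximation below is only a stand-in for ASKIT's interpolative-decomposition-based skeletonization; the sampling, rank, and bandwidth are all illustrative choices of mine:

```python
import numpy as np

def gauss(A, B, h):
    D2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.maximum(D2, 0.0) / (2.0 * h**2))

def far_field_low_rank(kfun, targets, sources, r, rng):
    # Rank-r cross approximation K(T,S) ~ K(T,skel) pinv(K(skel,skel)) K(skel,S),
    # built from kernel evaluations alone, with no analytic expansion.
    # Nystrom-style stand-in, not ASKIT's interpolative decomposition.
    skel = rng.choice(len(sources), size=r, replace=False)
    C = kfun(targets, sources[skel])                 # n_t x r
    W = kfun(sources[skel], sources[skel])           # r x r
    R = kfun(sources[skel], sources)                 # r x n_s
    return C @ (np.linalg.pinv(W) @ R)

rng = np.random.default_rng(3)
sources = rng.normal(size=(500, 64))
targets = rng.normal(size=(300, 64))
targets[:, 0] += 6.0                                 # separate targets from sources
kfun = lambda A, B: gauss(A, B, h=8.0)
K = kfun(targets, sources)
K_r = far_field_low_rank(kfun, targets, sources, r=30, rng=rng)
print(np.linalg.norm(K - K_r) / np.linalg.norm(K))   # relative error of the rank-30 sketch
```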
We present the new algorithm along with experimental results that demonstrate its performance. We report results for Gaussian kernel sums for 100 million points in 64 dimensions, for one million points in 1000 dimensions, and for problems in which the Gaussian kernel has a variable bandwidth. To the best of our knowledge, all of these experiments are impossible or prohibitively expensive with existing fast kernel summation methods.