Bioinformatics Algorithms
Bioinformatics Algorithms: An Active Learning Approach
2nd Edition, Vol. I by Phillip Compeau & Pavel Pevzner
2018-09-06
Language Modeling with Gated Convolutional Networks
The predominant approach to language modeling to date is based on recurrent neural networks. In this paper we present a convolutional approach to language modeling. We introduce a novel gating mechanism that eases gradient propagation and which performs better than the LSTM-style gating of Oord et al. (2016b) despite being simpler. We achieve a new state of the art on WikiText-103 as well as a new best single-GPU result on the Google Billion Word benchmark. In settings where latency is important, our model achieves an order of magnitude speed-up compared to a recurrent baseline since computation can be parallelized over time. To our knowledge, this is the first time a non-recurrent approach outperforms strong recurrent models on these tasks.
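The gating mechanism the abstract refers to is the gated linear unit: the output of a linear (or convolutional) layer is multiplied elementwise by a sigmoid gate computed from a second layer. A minimal dense-layer sketch, with illustrative shapes and parameter names (`W`, `b`, `V`, `c` are assumptions, not the paper's notation):

```python
import numpy as np

def glu(x, W, b, V, c):
    """Gated linear unit: h = (xW + b) * sigmoid(xV + c).

    The linear path (xW + b) carries the signal; the sigmoid gate
    sigma(xV + c) controls how much of each feature passes through,
    which eases gradient propagation compared to tanh-based gating.
    """
    a = x @ W + b                          # linear path
    g = 1.0 / (1.0 + np.exp(-(x @ V + c)))  # sigmoid gate in (0, 1)
    return a * g

# Toy usage: batch of 2 inputs, 3 features in, 4 features out.
x = np.ones((2, 3))
W = np.zeros((3, 4)); b = np.zeros(4)
V = np.zeros((3, 4)); c = np.zeros(4)
h = glu(x, W, b, V, c)  # zero linear path, so the output is all zeros
```

In the paper the two projections are causal 1-D convolutions over the token dimension rather than dense layers, which is what allows computation to be parallelized over time.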
2018-09-05
Solving the Quantum Many-Body Problem with Artificial Neural Networks
The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the non-trivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form, for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. A reinforcement-learning scheme is then demonstrated, capable of either finding the ground state or describing the unitary time evolution of complex interacting quantum systems. We show that this approach achieves very high accuracy in the description of equilibrium and dynamical properties of prototypical interacting spin models in both one and two dimensions, thus offering a new powerful tool to solve the quantum many-body problem.
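The variational representation in this paper is a restricted Boltzmann machine whose (unnormalized) amplitude for a spin configuration is a product over hidden units after tracing them out. A minimal sketch of that ansatz, with real parameters for simplicity (the paper uses complex ones; names `a`, `b`, `W` follow the standard RBM convention, not necessarily the paper's):

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Neural-network quantum state (RBM ansatz), hidden units traced out:

        psi(s) = exp(a . s) * prod_j 2*cosh(b_j + sum_i s_i * W[i, j])

    s : spin configuration, entries +1 / -1
    a : visible biases, b : hidden biases, W : visible-hidden couplings
    """
    theta = b + s @ W                      # effective angle per hidden unit
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Toy usage: 2 visible spins, 3 hidden units, all parameters zero,
# so psi(s) = (2*cosh(0))**3 = 8 for every configuration.
s = np.array([1.0, -1.0])
amp = rbm_amplitude(s, np.zeros(2), np.zeros(3), np.zeros((2, 3)))
```

Increasing the number of hidden units enlarges the variational family, which is how the representation trades compute for accuracy.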
2018-09-05
Distilling the Knowledge in a Neural Network
by Geoffrey Hinton
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions.
2018-09-05