WaveNet: A Generative Model for Raw Audio
(Submitted on 12 Sep 2016 (v1), last revised 19 Sep 2016 (this version, v2))
This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
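The autoregressive factorization described above — each sample's predictive distribution conditioned on all previous samples — can be illustrated with a toy sampling loop. This is a minimal sketch, not the paper's implementation: `toy_model`, its bias heuristic, and the tiny receptive field are all hypothetical stand-ins. A real WaveNet computes the predictive distribution with a stack of dilated causal convolutions over a much longer context.

```python
import numpy as np

# Sketch of autoregressive generation: the joint probability of a
# waveform x factorizes as p(x) = prod_t p(x_t | x_1, ..., x_{t-1}),
# so samples are drawn one at a time and fed back in as context.

rng = np.random.default_rng(0)
NUM_LEVELS = 256       # quantized amplitude classes (WaveNet uses 8-bit mu-law)
RECEPTIVE_FIELD = 16   # toy context length; a real WaveNet sees far more

def toy_model(context):
    """Hypothetical stand-in for the predictive distribution
    p(x_t | context): a softmax over NUM_LEVELS classes."""
    logits = np.zeros(NUM_LEVELS)
    if context:
        # Illustrative conditioning only: bias toward the previous
        # sample's value so the output is locally smooth.
        logits[context[-1]] = 2.0
    e = np.exp(logits - logits.max())
    return e / e.sum()

def generate(num_samples):
    samples = []
    for _ in range(num_samples):
        context = samples[-RECEPTIVE_FIELD:]          # condition on the past
        probs = toy_model(context)                    # p(x_t | x_<t)
        samples.append(int(rng.choice(NUM_LEVELS, p=probs)))
    return samples

audio = generate(100)
```

Note that this sequential loop is exactly why naive sampling is slow at audio rates (tens of thousands of samples per second), while training remains fast: during training all conditional distributions can be computed in parallel, since the true previous samples are known.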
Submission history
From: Aäron van den Oord
[v1] Mon, 12 Sep 2016 17:29:40 GMT (3057kb,D)
[v2] Mon, 19 Sep 2016 18:04:35 GMT (3055kb,D)
Summary: This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, and can be trained efficiently on long audio data. Applied to text-to-speech, its naturalness surpasses existing systems. It can also capture the characteristics of different speakers and switch between them, and it generates novel, realistic musical fragments.




