Title:TERA: SELF-SUPERVISED LEARNING OF TRANSFORMER ENCODER REPRESENTATION FOR SPEECH
-
What’s the main claim? Key idea?
This paper introduces TERA, a self-supervised speech pre-training method. The authors pre-train Transformer encoders on a large amount of unlabeled speech with a multi-target auxiliary task: the input acoustic frames are altered along time, frequency, and magnitude, and the model is trained to reconstruct the original frames. The learned representations outperform surface features (e.g., log-Mel spectrograms) and achieve strong performance on many downstream tasks.
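As a rough illustration of this alteration-and-reconstruction objective, here is a minimal PyTorch sketch (not the authors' code). It assumes log-Mel input features, random time/channel masking as the alteration, and an L1 loss on the altered bins; all module names, shapes, and hyper-parameters are illustrative, and the magnitude alteration described in the paper is omitted for brevity.

```python
import torch
import torch.nn as nn


class SpeechEncoder(nn.Module):
    """Transformer encoder with a frame-reconstruction head (illustrative sizes)."""
    def __init__(self, n_mels=80, d_model=256, n_layers=3, n_heads=4):
        super().__init__()
        self.proj_in = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.proj_out = nn.Linear(d_model, n_mels)  # reconstruction head

    def forward(self, x):                      # x: (batch, time, n_mels)
        h = self.encoder(self.proj_in(x))      # contextual representations
        return self.proj_out(h), h             # reconstruction, features


def alter(x, time_p=0.15, chan_p=0.1):
    """Corrupt random time steps and frequency channels; return (corrupted, mask)."""
    B, T, C = x.shape
    t_mask = torch.rand(B, T, 1, device=x.device) < time_p   # time alteration
    c_mask = torch.rand(B, 1, C, device=x.device) < chan_p   # channel alteration
    mask = t_mask | c_mask                                    # (B, T, C) after broadcast
    return x.masked_fill(mask, 0.0), mask


model = SpeechEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

feats = torch.randn(8, 200, 80)        # stand-in for a batch of unlabeled log-Mel features
corrupted, mask = alter(feats)
recon, _ = model(corrupted)
loss = nn.functional.l1_loss(recon[mask], feats[mask])  # reconstruct only the altered bins
loss.backward()
opt.step()
```

After pre-training, the encoder's hidden features (the second output of the forward pass) would be fed to downstream classifiers in place of the surface features.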
-
Is there code available? Data?
code: https://github.com/andi611/Self-Supervised-Speech-Pretraining-and-Representation-Learning
data: LibriSpeech and TIMIT
-
Is the idea neat? Is it counter-intuitive?
I think it’s a neat idea. Self-supervised learning has emerged as an attractive approach to leveraging knowledge from a large amount of unlabeled data. This paper introduces a multi-target auxiliary task that lets Transformer encoders learn from such unlabeled speech without any transcriptions.