The AISHELL-DMASH dataset is recorded in real smart-home scenarios in two different rooms and contains 30,000 hours of speech data. The recording devices include one close-talking microphone and seven groups of devices placed at seven different positions in the room. Each group consists of one iPhone, one Android phone, one iPad, one microphone, and one circular microphone array with a radius of 5 cm. The dataset covers 511 speakers, and each speaker is recorded in three sessions separated by 7-15 days. The AISHELL-DMASH dataset was transcribed by professional speech annotators under a rigorous quality-assurance process, reaching a word-level accuracy of 98%, which makes it suitable for research on speaker recognition, speech recognition, wake-word detection, and related tasks.
[Figure: The setup of the recording environment.]
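To make the channel layout described above concrete, the following is a minimal Python sketch of how the channels of a single utterance might be organized. All field names, channel names, and the manifest structure are hypothetical illustrations, not the dataset's actual format.

```python
# Hypothetical representation of one utterance's recording channels in
# AISHELL-DMASH-style metadata: 1 close-talk channel plus 7 positions x 5 devices.
from dataclasses import dataclass, field
from typing import List

DEVICES_PER_POSITION = ["iPhone", "Android", "iPad", "microphone", "circular_array_r5cm"]
NUM_POSITIONS = 7  # seven device groups at seven positions in the room

@dataclass
class UtteranceChannels:
    speaker_id: str          # one of the 511 speakers
    session: int             # 1, 2, or 3 (visits spaced 7-15 days apart)
    close_talk: str          # close-talking microphone recording
    far_field: List[str] = field(default_factory=list)  # one entry per position/device

def enumerate_far_field_channels(utt_id: str) -> List[str]:
    """List the far-field channel names for one utterance (hypothetical naming)."""
    return [
        f"{utt_id}_pos{pos}_{dev}"
        for pos in range(1, NUM_POSITIONS + 1)
        for dev in DEVICES_PER_POSITION
    ]

utt = UtteranceChannels(
    speaker_id="SPK0001",
    session=1,
    close_talk="utt0001_close.wav",
    far_field=enumerate_far_field_channels("utt0001"),
)
print(len(utt.far_field))  # 7 positions x 5 devices = 35 far-field channels
```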
The FFSVC 2020 challenge is designed to boost speaker verification research, with a special focus on far-field distributed microphone arrays under noisy conditions in real scenes. The objectives of this challenge are to: 1) benchmark current speaker verification technology under these challenging conditions, 2) promote the development of new ideas and technologies in speaker verification, and 3) provide an open, free, and large-scale speech database to the community that exhibits far-field characteristics in real scenes.
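Since benchmarking is a stated objective, a brief sketch of a standard speaker verification metric may be helpful. The Python example below computes the equal error rate (EER) from trial scores; the challenge's exact scoring protocol is not given in this excerpt, so this is only a generic, assumed illustration.

```python
# Minimal EER computation from similarity scores of target and non-target trials,
# assuming the standard sweep-the-threshold definition used in speaker verification.
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Return the EER given similarity scores for target and non-target trials."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)), np.zeros(len(nontarget_scores))])
    order = np.argsort(-scores)          # sweep thresholds from high to low
    labels = labels[order]
    fnr = 1.0 - np.cumsum(labels) / max(labels.sum(), 1)      # miss rate
    fpr = np.cumsum(1 - labels) / max((1 - labels).sum(), 1)  # false-alarm rate
    idx = np.nanargmin(np.abs(fnr - fpr))                     # crossing point
    return (fnr[idx] + fpr[idx]) / 2.0

# Toy example: well-separated scores give a low EER.
eer = compute_eer(np.array([2.1, 1.8, 1.5]), np.array([-0.2, 0.1, -1.0]))
print(f"EER = {eer:.3f}")
```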
The FFSVC20 challenge dataset is a subset of the DMASH dataset. It includes recordings from the close-talking microphone, the iPhone at a distance of 25 cm, and three randomly selected circular microphone arrays. In FFSVC20, the training partition includes 120 speakers and the development partition includes 35 speakers; for each task, the evaluation set includes 80 speakers.
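As a rough illustration of these partitions, the sketch below records the speaker counts and checks that the splits are speaker-disjoint, which is the usual assumption for verification benchmarks; the speaker-ID formats are hypothetical placeholders, not the dataset's actual identifiers.

```python
# Hypothetical speaker-ID sets mirroring the partition sizes described above.
TRAIN_SPEAKERS = {f"trn_{i:03d}" for i in range(120)}   # 120 training speakers
DEV_SPEAKERS   = {f"dev_{i:03d}" for i in range(35)}    # 35 development speakers
EVAL_SPEAKERS  = {f"evl_{i:03d}" for i in range(80)}    # 80 evaluation speakers per task

partitions = {"train": TRAIN_SPEAKERS, "dev": DEV_SPEAKERS, "eval": EVAL_SPEAKERS}

# Verification benchmarks typically require speaker-disjoint partitions.
names = list(partitions)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert partitions[a].isdisjoint(partitions[b]), f"{a} and {b} share speakers"

print({name: len(spk) for name, spk in partitions.items()})
```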