Voice Activity Detector: webrtcvad

Overview

WebRTC is a free, open framework/project that enables web browsers to implement real-time communication through simple JavaScript APIs.

WebRTC: An open framework for the web that enables Real-Time Communications (RTC) capabilities in the browser.

Endpoint detection is an important step in speech signal processing and the foundation of many speech tasks.
The VAD that ships with WebRTC, developed by Google, is one of the most effective, most advanced, and free voice activity detectors currently available.
webrtcvad is a Python interface to the WebRTC Voice Activity Detector (VAD). It supports both Python 2 and Python 3, and it can distinguish the silent frames from the voiced frames in a segment of speech.

Installation

webrtcvad installed successfully (see: Editing Audio with Python / 使用Python编辑音频).
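
The package is published on PyPI, so a typical installation (assuming pip is available) is just:

pip install webrtcvad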

Usage

1. Detecting silent segments

(1) test_webrtcvad.py
This script detects the silent portions of a WAV file.

import collections
import contextlib
import sys
import wave
 
import webrtcvad
 
 
def read_wave(path):
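    """Read a WAV file; assert mono, 16-bit PCM at 8/16/32 kHz and return (pcm_bytes, sample_rate)."""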
    with contextlib.closing(wave.open(path, 'rb')) as wf:
        num_channels = wf.getnchannels()
        assert num_channels == 1
        sample_width = wf.getsampwidth()
        assert sample_width == 2
        sample_rate = wf.getframerate()
        assert sample_rate in (8000, 16000, 32000)
        pcm_data = wf.readframes(wf.getnframes())
        return pcm_data, sample_rate
 
 
def write_wave(path, audio, sample_rate):
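    """Write raw PCM bytes to a mono, 16-bit WAV file at the given sample rate."""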
    with contextlib.closing(wave.open(path, 'wb')) as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(sample_rate)
        wf.writeframes(audio)
 
 
class Frame(object):
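    """A frame of audio data together with its timestamp and duration (both in seconds)."""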
    def __init__(self, bytes, timestamp, duration):
        self.bytes = bytes
        self.timestamp = timestamp
        self.duration = duration
 
 
def frame_generator(frame_duration_ms, audio, sample_rate):
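    """Yield consecutive Frames of frame_duration_ms milliseconds from 16-bit mono PCM audio."""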
    n = int(sample_rate * (frame_duration_ms / 1000.0) * 2)
    offset = 0
    timestamp = 0.0
    duration = (float(n) / sample_rate) / 2.0
    while offset + n < len(audio):
        yield Frame(audio[offset:offset + n], timestamp, duration)
        timestamp += duration
        offset += n
 
 
def vad_collector(sample_rate, frame_duration_ms,
                  padding_duration_ms, vad, frames):
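    """Filter out non-voiced audio using a padded, sliding window over the frames.

    Prints '1' or '0' for each frame's VAD decision. The collector enters the
    triggered state once more than 90% of the frames in the ring buffer are
    voiced, and leaves it (yielding the collected voiced audio as bytes) once
    more than 90% of the frames in the buffer are unvoiced.
    """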
    num_padding_frames = int(padding_duration_ms / frame_duration_ms)
    ring_buffer = collections.deque(maxlen=num_padding_frames)
    triggered = False
    voiced_frames = []
    for frame in frames:
        sys.stdout.write(
            '1' if vad.is_speech(frame.bytes, sample_rate) else '0')
        if not triggered:
            ring_buffer.append(frame)
            num_voiced = len([f for f in ring_buffer
                              if vad.is_speech(f.bytes, sample_rate)])
            if num_voiced > 0.9 * ring_buffer.maxlen:
                sys.stdout.write('+(%s)' % (ring_buffer[0].timestamp,))
                triggered = True
                voiced_frames.extend(ring_buffer)
                ring_buffer.clear()
        else:
            voiced_frames.append(frame)
            ring_buffer.append(frame)
            num_unvoiced = len([f for f in ring_buffer
                                if not vad.is_speech(f.bytes, sample_rate)])
            if num_unvoiced > 0.9 * ring_buffer.maxlen:
                sys.stdout.write('-(%s)' % (frame.timestamp + frame.duration))
                triggered = False
                yield b''.join([f.bytes for f in voiced_frames])
                ring_buffer.clear()
                voiced_frames = []
    if triggered:
        sys.stdout.write('-(%s)' % (frame.timestamp + frame.duration))
    sys.stdout.write('\n')
    if voiced_frames:
        yield b''.join([f.bytes for f in voiced_frames])
 
 
def main(args):
    if len(args) != 2:
        sys.stderr.write(
            'Usage: example.py <aggressiveness> <path to wav file>\n')
        sys.exit(1)
    audio, sample_rate = read_wave(args[1])
    vad = webrtcvad.Vad(int(args[0]))
    frames = frame_generator(30, audio, sample_rate)
    frames = list(frames)
    segments = vad_collector(sample_rate, 30, 300, vad, frames)
    for i, segment in enumerate(segments):
        #path = 'chunk-%002d.wav' % (i,)
        print('--end')
        #write_wave(path, segment, sample_rate)
 
 
if __name__ == '__main__':
    main(sys.argv[1:])

(2) Running from the command line

The first argument is the aggressiveness mode, an integer from 0 to 3 (0: Normal, 1: Low Bitrate, 2: Aggressive, 3: Very Aggressive), which can be adjusted to suit the application: the higher the value, the more aggressively the VAD filters out non-speech, so quiet or borderline frames are more likely to be classified as silence. The second argument is the path to the WAV file; only 8 kHz, 16 kHz, and 32 kHz sample rates are currently supported by the script.

For example, running python3 webrtc_vad.py 2 123456_1.wav produces:

1111111111+(0.0)111111111111111000011111111111111111111111111111111111111111111111111111111111111111111101111111111111111000011111111111111111111111111111111111111111111111-(4.979999999999997)
--end
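
Each '1' or '0' in the output is the per-frame is_speech decision, '+(t)' marks the start time of a detected speech segment, '-(t)' marks its end, and '--end' is printed once per collected segment. The underlying API is small enough to call directly; a minimal sketch, assuming a 30 ms frame of 16 kHz, 16-bit mono PCM, with an all-zero buffer standing in for silence:

import webrtcvad

vad = webrtcvad.Vad(2)                     # aggressiveness mode 0-3
sample_rate = 16000                        # 8000, 16000 or 32000 Hz, as in the script above
frame_ms = 30                              # webrtcvad accepts 10, 20 or 30 ms frames
num_samples = int(sample_rate * frame_ms / 1000)
silent_frame = b'\x00\x00' * num_samples   # 16-bit mono PCM, all zeros
print(vad.is_speech(silent_frame, sample_rate))   # typically False for pure silence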

2. Removing silent segments

Documented in: 1012_vad_delete_non_voiced.ipynb
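
The notebook itself is not reproduced here, but the same effect can be obtained with the helpers from test_webrtcvad.py above: collect the voiced segments from vad_collector and write only those back out. A minimal sketch, assuming read_wave, write_wave, frame_generator and vad_collector are importable from that script, with placeholder file names input.wav and output.wav:

import webrtcvad
from test_webrtcvad import read_wave, write_wave, frame_generator, vad_collector

def remove_silence(in_path, out_path, aggressiveness=2):
    audio, sample_rate = read_wave(in_path)
    vad = webrtcvad.Vad(aggressiveness)
    frames = list(frame_generator(30, audio, sample_rate))
    # vad_collector yields only the voiced chunks; everything else is dropped.
    segments = vad_collector(sample_rate, 30, 300, vad, frames)
    write_wave(out_path, b''.join(segments), sample_rate)

remove_silence('input.wav', 'output.wav')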

References:

  1. Installing and using webrtcvad (安装使用webrtcvad)
  2. wiseman/py-webrtcvad

See also: Voice activity detection (VAD) with webrtcvad (语音端点检测(VAD)----webrtcvad)
