webrtcvad (Python): Voice Activity Detection

Voice activity (endpoint) detection with py-webrtcvad

Algorithm overview

The WebRTC VAD models speech and noise with a GMM (Gaussian Mixture Model) and decides between them by comparing the corresponding probabilities. The advantage of this approach is that it is unsupervised and requires no explicit training. The Gaussian model for noise and speech is:

p(x_k \mid z, r_k) = \frac{1}{\sqrt{2\pi\sigma_z^2}} \exp\left(-\frac{(x_k - \mu_z)^2}{2\sigma_z^2}\right)

Here x_k is the selected feature, which in the WebRTC VAD is the sub-band energy; r_k is the parameter set consisting of the mean \mu_z and the variance \sigma_z^2; z = 0 denotes noise and z = 1 denotes speech.
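
As a quick numeric illustration of the formula above, the snippet below evaluates the noise and speech Gaussians for a single feature value. This is only a sketch: the means, variances and the feature value are made-up numbers, not the actual WebRTC model parameters.

```python
import math

def gaussian_likelihood(x, mean, variance):
    # p(x | z): a single Gaussian evaluated at the feature value x
    return math.exp(-(x - mean) ** 2 / (2.0 * variance)) / math.sqrt(2.0 * math.pi * variance)

x_k = 12.0                                                     # hypothetical sub-band energy
p_noise = gaussian_likelihood(x_k, mean=8.0, variance=9.0)     # z = 0 (noise model)
p_speech = gaussian_likelihood(x_k, mean=15.0, variance=16.0)  # z = 1 (speech model)
print('speech' if p_speech > p_noise else 'noise')
```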

The detailed steps of the VAD C code in WebRTC are as follows (illustrative sketches follow after the list):

  • 1. Set the mode

    Based on the hangover, individual-decision and global-decision thresholds, VAD detection is divided into the following 4 modes:

    0 - quality mode
    1 - low bitrate mode
    2 - aggressive mode
    3 - very aggressive mode
    
  • 2. The WebRTC VAD only supports frame lengths of 10 ms, 20 ms and 30 ms, so this is checked up front and non-conforming input returns -1.

  • 3. The core VAD computation only supports an 8 kHz sample rate, so input signals at 32 kHz or 16 kHz are first downsampled to 8 kHz.
  • 4. At the 8 kHz sample rate the processing consists of two steps:

    • 4.1 Compute the sub-band energies

      The sub-bands are 80~250 Hz, 250~500 Hz, 500~1000 Hz, 1000~2000 Hz, 2000~3000 Hz and 3000~4000 Hz.
      
      The energy of each of these sub-bands is computed to form the feature vector (feature_vector).
      
    • 4.2 Compute the speech and non-speech probabilities with the Gaussian mixture model and decide the signal type with a hypothesis test

      First the Gaussian models are used to evaluate the hypotheses H0 and H1 (h0_test and h1_test in the C code), and vadflag is obtained by thresholding.
      
      Then the parameters needed for the probability computation are updated: the speech means (speech_means), the noise means (noise_means), the speech standard deviations (speech_stds) and the noise standard deviations (noise_stds).
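
Steps 1-3 map directly onto the py-webrtcvad wrapper used below: pick a mode between 0 and 3 and feed 10/20/30 ms frames of 16-bit mono PCM at a supported sample rate. This is a minimal sketch; note that the Python wrapper reports invalid frame sizes or rates by raising an exception rather than returning -1 as the C API does, and it additionally accepts 48000 Hz input.

```python
import webrtcvad

vad = webrtcvad.Vad(2)      # mode 2: aggressive
vad.set_mode(3)             # the mode can also be changed later

sample_rate = 16000         # 8000, 16000, 32000 (or 48000 with the wrapper) Hz
frame_ms = 30               # must be 10, 20 or 30 ms
n_bytes = int(sample_rate * frame_ms / 1000) * 2  # 16-bit mono PCM -> 2 bytes/sample
frame = b'\x00' * n_bytes   # an all-zero (silent) frame, just for illustration
print(vad.is_speech(frame, sample_rate))
```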
      
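For steps 4.1 and 4.2, the sketch below illustrates the idea in floating-point Python rather than the fixed-point C code: a small Butterworth filter bank approximates the six sub-bands, the log energy of each band is the feature, and a log-likelihood-ratio test stands in for the H0/H1 decision. The filter design, the single Gaussian per class and band (the real code uses a two-component mixture per band), and every numeric parameter here are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 8000                                   # core VAD sample rate
SUB_BANDS = [(80, 250), (250, 500), (500, 1000),
             (1000, 2000), (2000, 3000), (3000, 4000)]

def subband_energies(frame):
    # feature_vector: log energy of each sub-band of one frame
    feats = []
    for low, high in SUB_BANDS:
        high = min(high, FS / 2 - 1)        # keep the upper edge below Nyquist
        b, a = butter(2, [low / (FS / 2), high / (FS / 2)], btype='band')
        band = lfilter(b, a, frame)
        feats.append(np.log(np.sum(band ** 2) + 1e-10))
    return np.array(feats)

def log_gaussian(x, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

def frame_is_speech(frame, noise_means, noise_vars,
                    speech_means, speech_vars, threshold=0.0):
    feats = subband_energies(frame)
    h0 = np.sum(log_gaussian(feats, noise_means, noise_vars))    # noise hypothesis
    h1 = np.sum(log_gaussian(feats, speech_means, speech_vars))  # speech hypothesis
    return (h1 - h0) > threshold            # log-likelihood ratio decision

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    frame = rng.standard_normal(80)         # one 10 ms frame at 8 kHz, white noise
    noise_means, noise_vars = np.full(6, -3.0), np.full(6, 1.0)   # made-up models
    speech_means, speech_vars = np.full(6, 2.0), np.full(6, 4.0)
    print(frame_is_speech(frame, noise_means, noise_vars, speech_means, speech_vars))
```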

Example code

import collections
import contextlib
import sys
import wave

import webrtcvad


def read_wave(path):
    with contextlib.closing(wave.open(path, 'rb')) as wf:
        num_channels = wf.getnchannels()
        assert num_channels == 1
        sample_width = wf.getsampwidth()
        assert sample_width == 2
        sample_rate = wf.getframerate()
        assert sample_rate in (8000, 16000, 32000)
        pcm_data = wf.readframes(wf.getnframes())
        return pcm_data, sample_rate


def write_wave(path, audio, sample_rate):
    with contextlib.closing(wave.open(path, 'wb')) as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(sample_rate)
        wf.writeframes(audio)


class Frame(object):
    def __init__(self, bytes, timestamp, duration):
        self.bytes = bytes
        self.timestamp = timestamp
        self.duration = duration


def frame_generator(frame_duration_ms, audio, sample_rate):
    # 16-bit mono PCM: 2 bytes per sample, so the frame size in bytes is
    # sample_rate * (duration in seconds) * 2.
    n = int(sample_rate * (frame_duration_ms / 1000.0) * 2)
    offset = 0
    timestamp = 0.0
    duration = (float(n) / sample_rate) / 2.0
    while offset + n < len(audio):
        yield Frame(audio[offset:offset + n], timestamp, duration)
        timestamp += duration
        offset += n


def vad_collector(sample_rate, frame_duration_ms,
                  padding_duration_ms, vad, frames):
    # Keep a ring buffer of padding_duration_ms worth of frames: speech starts
    # when more than 90% of the buffered frames are voiced, and ends when more
    # than 90% are unvoiced; each detected segment is yielded as raw PCM bytes.
    num_padding_frames = int(padding_duration_ms / frame_duration_ms)
    ring_buffer = collections.deque(maxlen=num_padding_frames)
    triggered = False
    voiced_frames = []
    for frame in frames:
        sys.stdout.write(
            '1' if vad.is_speech(frame.bytes, sample_rate) else '0')
        if not triggered:
            ring_buffer.append(frame)
            num_voiced = len([f for f in ring_buffer
                              if vad.is_speech(f.bytes, sample_rate)])
            if num_voiced > 0.9 * ring_buffer.maxlen:
                sys.stdout.write('+(%s)' % (ring_buffer[0].timestamp,))
                triggered = True
                voiced_frames.extend(ring_buffer)
                ring_buffer.clear()
        else:
            voiced_frames.append(frame)
            ring_buffer.append(frame)
            num_unvoiced = len([f for f in ring_buffer
                                if not vad.is_speech(f.bytes, sample_rate)])
            if num_unvoiced > 0.9 * ring_buffer.maxlen:
                sys.stdout.write('-(%s)' % (frame.timestamp + frame.duration))
                triggered = False
                yield b''.join([f.bytes for f in voiced_frames])
                ring_buffer.clear()
                voiced_frames = []
    if triggered:
        sys.stdout.write('-(%s)' % (frame.timestamp + frame.duration))
    sys.stdout.write('\n')
    if voiced_frames:
        yield b''.join([f.bytes for f in voiced_frames])


def main(args):
    if len(args) != 2:
        sys.stderr.write(
            'Usage: example.py <aggressiveness> <path to wav file>\n')
        sys.exit(1)
    audio, sample_rate = read_wave(args[1])
    vad = webrtcvad.Vad(int(args[0]))
    frames = frame_generator(30, audio, sample_rate)
    frames = list(frames)
    segments = vad_collector(sample_rate, 30, 300, vad, frames)
    for i, segment in enumerate(segments):
        path = 'chunk-%002d.wav' % (i,)
        print(' Writing %s' % (path,))
        write_wave(path, segment, sample_rate)


if __name__ == '__main__':
    main(sys.argv[1:])
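
To run the example, save it as example.py and pass an aggressiveness mode (0-3) plus a mono, 16-bit PCM WAV file at 8, 16 or 32 kHz, for example `python example.py 3 test.wav` (the file name is just a placeholder); each detected voiced segment is written out as chunk-NN.wav.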

Reference:

http://blog.csdn.net/u012931018/article/details/16903027

GitHub:

https://github.com/wiseman/py-webrtcvad
