The aubio libraries have been wrapped with SWIG and can thus be used from Python. Among their many features are several methods for pitch detection/estimation, including the YIN algorithm and some harmonic comb algorithms.
However, if you want something simpler, I wrote some code for pitch estimation a while ago; take it or leave it. It won't be as accurate as the algorithms in aubio, but it might be good enough for your needs. I basically take the FFT of the data multiplied by a window (a Blackman window in this case), square the FFT values, find the bin with the highest value, and use quadratic interpolation around the peak — using the log of the maximum value and its two neighboring values — to find the fundamental frequency. I took the quadratic interpolation from a paper I found.
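A minimal sketch of just the quadratic (parabolic) interpolation step described above: given the log magnitudes at three consecutive FFT bins, with the middle one the largest, the true peak lies at a fractional bin offset from the middle bin. The helper name `parabolic_offset` is mine, not from the script.

```python
import numpy as np

def parabolic_offset(y0, y1, y2):
    """Fractional offset of the true peak from the middle bin, given log
    magnitudes at three consecutive bins where y1 is the local maximum."""
    return (y2 - y0) * .5 / (2 * y1 - y2 - y0)

# Sanity check: sample an exact parabola peaked 0.3 bins right of center;
# the interpolation recovers the offset exactly.
xs = np.array([-1.0, 0.0, 1.0])
ys = -(xs - 0.3) ** 2
offset = parabolic_offset(*ys)
```

The interpolation is exact for a parabola; on a real log spectrum it is an approximation whose error shrinks as the window's mainlobe becomes more parabola-like in log scale.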
It works fairly well on test tones, but it will not be as robust or as accurate as the other methods mentioned above. Accuracy can be increased by increasing the chunk size (or decreased by reducing it). The chunk size should be a power of 2 to make full use of the FFT. Also, I only determine the fundamental pitch of each chunk, with no overlap between chunks. I use PyAudio to play the sound back while writing out the estimated pitch.
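To see why a larger chunk improves accuracy: the width of each FFT bin is `RATE / chunk` Hz, so doubling the chunk halves the bin width (at the cost of time resolution). The sizes below are just illustrative:

```python
# Bin width in Hz per FFT bin is the sample rate divided by the chunk size.
RATE = 44100
for chunk in (1024, 2048, 4096):
    print("chunk=%d: bin width = %.3f Hz" % (chunk, RATE / chunk))
```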
Source code:
# Read in a WAV and find the freq's
import struct
import wave

import numpy as np
import pyaudio

chunk = 2048
# open up a wave
wf = wave.open('test-tones/440hz.wav', 'rb')
swidth = wf.getsampwidth()
RATE = wf.getframerate()
# use a Blackman window
window = np.blackman(chunk)
# open stream
p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=RATE,
                output=True)

# read some data
data = wf.readframes(chunk)
# play stream and find the frequency of each chunk
while len(data) == chunk * swidth:
    # write data out to the audio stream
    stream.write(data)
    # unpack the data and multiply by the Blackman window
    indata = np.array(struct.unpack("%dh" % (len(data) // swidth),
                                    data)) * window
    # take the FFT and square each value
    fftData = abs(np.fft.rfft(indata)) ** 2
    # find the maximum
    which = fftData[1:].argmax() + 1
    # use quadratic interpolation around the max
    if which != len(fftData) - 1:
        y0, y1, y2 = np.log(fftData[which - 1:which + 2])
        x1 = (y2 - y0) * .5 / (2 * y1 - y2 - y0)
        # find the frequency and output it
        thefreq = (which + x1) * RATE / chunk
        print("The freq is %f Hz." % thefreq)
    else:
        thefreq = which * RATE / chunk
        print("The freq is %f Hz." % thefreq)
    # read some more data
    data = wf.readframes(chunk)

# play any leftover partial chunk, then clean up
if data:
    stream.write(data)
stream.close()
p.terminate()
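If you don't have the WAV file or PyAudio handy, the estimation steps can be exercised on a synthesized tone. This sketch reuses the same pipeline as the script — Blackman window, squared FFT, log-parabolic interpolation around the peak bin — on one chunk of a generated 440 Hz sine (the tone frequency and sample rate are my assumptions, not taken from the file above):

```python
import numpy as np

# Demo parameters; 440 Hz is an assumed test tone, not read from a WAV.
RATE = 44100
chunk = 2048
true_freq = 440.0

# synthesize one chunk of a pure tone instead of reading from a file
t = np.arange(chunk) / RATE
indata = np.sin(2 * np.pi * true_freq * t) * np.blackman(chunk)

# same steps as the script: squared FFT, peak bin, quadratic interpolation
fftData = abs(np.fft.rfft(indata)) ** 2
which = fftData[1:].argmax() + 1
y0, y1, y2 = np.log(fftData[which - 1:which + 2])
x1 = (y2 - y0) * .5 / (2 * y1 - y2 - y0)
thefreq = (which + x1) * RATE / chunk
print("The freq is %f Hz." % thefreq)
```

With a 2048-sample chunk at 44100 Hz the bin width is about 21.5 Hz, so the raw peak bin alone would be off by several Hz; the interpolation is what brings the estimate close to 440 Hz.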