In a recent project, MP3 decoding was handled by the JLayer decoder. When playing TTS speech, the last word was always cut off halfway, with the remainder swallowed. The iOS version of the app had no such problem with the very same cloud-delivered TTS, so naturally the next step was to compare the waveforms before and after decoding…
Let me recommend an audio processing tool for the Mac: Audacity
That's the one; it makes inspecting waveforms very convenient.
Official site:
Comparing the decoded (PCM) waveform against the original (MP3) one: sure enough, the tail end is missing.
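Alongside the waveform diff, a rough numeric cross-check is to compute the expected duration of the raw PCM from its byte count. This is a minimal sketch assuming the 16 kHz mono 16-bit format used below; the class name and the byte counts are illustrative, not from the project:

```java
// Sanity check: duration of raw 16-bit PCM = bytes / (sampleRate * channels * 2).
// A decode that swallows the tail comes back measurably shorter than the source.
public class PcmDuration {
    static double durationSeconds(long pcmBytes, int sampleRate, int channels) {
        // 16-bit samples -> 2 bytes per sample per channel
        return pcmBytes / (double) (sampleRate * channels * 2);
    }

    public static void main(String[] args) {
        // e.g. a 2-second utterance at 16 kHz mono is 64000 bytes
        System.out.println(durationSeconds(64000, 16000, 1)); // 2.0
        // a truncated decode of the same clip, hypothetically 57600 bytes
        System.out.println(durationSeconds(57600, 16000, 1)); // 1.8
    }
}
```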
The verdict was in: all the clues pointed to the culprit, JLayer. First, a look at how JLayer is used, which is fairly simple:
1. Wrap the InputStream in a Bitstream
2. Call bitstream.readFrame in a loop to read each frame's header
3. Call mDecoder.decodeFrame on each frame to get the decoded buffer
4. Write the PCM to the AudioTrack
@Override
protected void onCreate(Bundle savedInstanceState) {
    JavaLayerUtils.setContext(getApplicationContext());
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    requestPermissions();

    final int sampleRate = 16000;
    final int minBufferSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
            sampleRate,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize,
            AudioTrack.MODE_STREAM);
    mDecoder = new Decoder();

    Thread thread = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                InputStream in = new URL("https://weixin.wangp.org/pre-decode.mp3")
                        .openConnection()
                        .getInputStream();
                Bitstream bitstream = new Bitstream(in);
                // start playback so the blocking write() calls below can drain
                mAudioTrack.play();

                // effectively "keep going until readFrame returns null"
                int framesRemaining = Integer.MAX_VALUE;
                Header header;
                while (framesRemaining-- > 0 && (header = bitstream.readFrame()) != null) {
                    SampleBuffer sampleBuffer = (SampleBuffer) mDecoder.decodeFrame(header, bitstream);
                    short[] buffer = sampleBuffer.getBuffer();
                    // getBuffer() hands back the decoder's whole internal buffer;
                    // for 16 kHz mono only the first quarter (576 samples) is valid
                    short[] newBuffer = new short[buffer.length / 4];
                    System.arraycopy(buffer, 0, newBuffer, 0, buffer.length / 4);
                    writeToFile(newBuffer); // dump the PCM for waveform inspection (helper not shown)
                    mAudioTrack.write(newBuffer, 0, newBuffer.length);
                    bitstream.closeFrame();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
    thread.start();
}
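One more way to quantify how much the loop above loses: for MPEG-2 Layer III (which a 16 kHz MP3 is), each frame decodes to 576 mono samples, so the expected frame count follows from the clip's duration. A hedged sketch, with a hypothetical class name and illustrative numbers:

```java
// Cross-check the decode loop's frame count against what the clip should
// contain. If bitstream.readFrame() returns null early, the count comes up
// short and the missing tail maps directly to missing frames.
public class FrameCheck {
    static final int SAMPLES_PER_FRAME = 576; // MPEG-2/2.5 Layer III

    static int expectedFrames(double durationSeconds, int sampleRate) {
        return (int) Math.ceil(durationSeconds * sampleRate / SAMPLES_PER_FRAME);
    }

    public static void main(String[] args) {
        // a 2.0 s TTS clip at 16 kHz should yield about 56 frames;
        // if the decode loop sees fewer, the tail was swallowed
        System.out.println(expectedFrames(2.0, 16000)); // 56
    }
}
```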