Motivation: the webrtc_module_audio_mixer module needs to be wrapped, so this note analyzes how it is called.
Overview: reading the internal code shows that webrtc_module_audio_mixer first applies half-gain to each source's PCM, then accumulates the PCM sample by sample, smooths the mixed data via LimitMixedAudio, and finally simply doubles the PCM. The key step is the ProcessStream processing inside AudioProcessing.
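That half-gain / accumulate / limit / double pipeline can be sketched as follows. This is a simplified, self-contained illustration of the idea, not the actual WebRTC code, which uses a real limiter (LimitMixedAudio) rather than a hard clip and additionally runs AudioProcessing on the mix:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of what webrtc_module_audio_mixer does to the PCM at a high level:
// half-gain each source, accumulate into a wide accumulator, double the
// result back, and clip into int16 range (stand-in for the limiter).
std::vector<int16_t> MixFrames(const std::vector<std::vector<int16_t>>& sources) {
  if (sources.empty()) return {};
  const size_t n = sources[0].size();
  std::vector<int32_t> acc(n, 0);
  for (const auto& src : sources) {
    for (size_t i = 0; i < n; ++i) {
      acc[i] += src[i] / 2;  // half-gain before accumulation
    }
  }
  std::vector<int16_t> out(n);
  for (size_t i = 0; i < n; ++i) {
    const int32_t doubled = acc[i] * 2;  // the "expand by 2" step
    out[i] = static_cast<int16_t>(
        std::clamp<int32_t>(doubled, -32768, 32767));  // hard clip, not LimitMixedAudio
  }
  return out;
}
```

Note that for two sources the half-gain and the final doubling cancel, which is why mixing does not attenuate the individual streams.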
The interface (pseudocode) is as follows:
First, implement the Source class. Each call to Mix internally invokes GetAudioFrameWithInfo, so that method must fill in the AudioFrame data. Ssrc is the identifier Mix uses internally to distinguish mixing participants, so each participant must return a distinct value.
class AudioMixer : public rtc::RefCountInterface {
 public:
  // A callback class that all mixer participants must inherit from/implement.
  class Source {
   public:
    enum class AudioFrameInfo {
      kNormal,  // The samples in audio_frame are valid and should be used.
      kMuted,   // The samples in audio_frame should not be used, but
                // should be implicitly interpreted as zero. Other
                // fields in audio_frame may be read and should
                // contain meaningful values.
      kError,   // The audio_frame will not be used.
    };

    // Overwrites |audio_frame|. The data_ field is overwritten with
    // 10 ms of new audio (either 1 or 2 interleaved channels) at
    // |sample_rate_hz|. All fields in |audio_frame| must be updated.
    virtual AudioFrameInfo GetAudioFrameWithInfo(int sample_rate_hz,
                                                 AudioFrame* audio_frame) = 0;

    // A way for a mixer implementation to distinguish participants.
    virtual int Ssrc() const = 0;

    // A way for this source to say that GetAudioFrameWithInfo called
    // with this sample rate or higher will not cause quality loss.
    virtual int PreferredSampleRate() const = 0;

    virtual ~Source() {}
  };

  // Remaining mixer methods (AddSource, RemoveSource, Mix) omitted here.
};
An implementation sketch follows; note that the frame data is filled in inside GetAudioFrameWithInfo.
class AudioSrc : public webrtc::AudioMixer::Source {
 public:
  // Called by the mixer on every Mix(): hand back the most recently
  // filled 10 ms frame.
  AudioFrameInfo GetAudioFrameWithInfo(int sample_rate_hz,
                                       webrtc::AudioFrame* audio_frame) override {
    printf("GetAudioFrameWithInfo %d\n", sample_rate_hz);
    audio_frame->CopyFrom(m_audioFrame);
    return AudioFrameInfo::kNormal;
  }

  // Must be unique per participant so the mixer can tell sources apart.
  int Ssrc() const override { return m_isrc; }

  int PreferredSampleRate() const override { return DEF_SAMPLE_RATE; }

  // Fill one 10 ms frame; iSample is the total number of interleaved
  // int16 samples (samples_per_channel * num_channels).
  void fillaudioframe(short* pdata, int iSample) {
    m_audioFrame.samples_per_channel_ = DEF_SAMPLE_RATE / 100;  // 10 ms
    m_audioFrame.sample_rate_hz_ = DEF_SAMPLE_RATE;
    m_audioFrame.id_ = 1;
    m_audioFrame.num_channels_ = 2;
    memcpy(m_audioFrame.data_, pdata, iSample * sizeof(short));
    m_audioFrame.vad_activity_ = webrtc::AudioFrame::kVadActive;
    m_audioFrame.speech_type_ = webrtc::AudioFrame::kNormalSpeech;
  }

  webrtc::AudioFrame m_audioFrame;
  int m_isrc = 0;  // set by the owner; must differ for each source
};
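The buffer sizing in fillaudioframe follows WebRTC's 10 ms framing: samples_per_channel_ is sample_rate / 100, and the interleaved data_ buffer holds samples_per_channel_ * num_channels_ int16 values. A quick sanity check of that arithmetic (DEF_SAMPLE_RATE is a project constant; 48000 is assumed here purely for illustration):

```cpp
#include <cstddef>

// One 10 ms frame per channel: the amount the mixer pulls per Mix() call.
constexpr size_t SamplesPerChannel10Ms(size_t sample_rate_hz) {
  return sample_rate_hz / 100;  // 10 ms == 1/100 s
}

// Total interleaved int16 samples in one 10 ms frame.
constexpr size_t TotalSamples10Ms(size_t sample_rate_hz, size_t channels) {
  return SamplesPerChannel10Ms(sample_rate_hz) * channels;
}
```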
Each fill supplies 10 ms worth of samples. The waveform shows that mixing multiple streams does not attenuate the signal.
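Driving the mixer then looks roughly like the following pseudocode sketch. AudioMixerImpl::Create, AddSource and Mix are the relevant WebRTC entry points, but exact signatures vary across WebRTC revisions, and pcm1/pcm2/TOTAL_SAMPLES are hypothetical placeholders:

```
// create the mixer and register two sources
rtc::scoped_refptr<webrtc::AudioMixer> mixer = webrtc::AudioMixerImpl::Create();
AudioSrc src1, src2;
src1.m_isrc = 1;  // distinct Ssrc() per participant
src2.m_isrc = 2;
mixer->AddSource(&src1);
mixer->AddSource(&src2);

// every 10 ms: refill each source, then mix
src1.fillaudioframe(pcm1, TOTAL_SAMPLES);
src2.fillaudioframe(pcm2, TOTAL_SAMPLES);
webrtc::AudioFrame mixed;
mixer->Mix(2 /* number_of_channels */, &mixed);  // invokes GetAudioFrameWithInfo on each source
```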
Project code: https://github.com/quanwstone/WebRTC
Summary: the test code and source show that webrtc_module_audio_mixer is a simple linear PCM sum; the real core is the smoothing of the PCM (AudioProcessing's ProcessStream method). The internals of AudioProcessing will be analyzed in a follow-up.