BASIC AUDIO PROCESSING OPERATIONS IN WEBRTC

In RTC, that is, real-time audio and video communication, the audio-related problems to solve mainly include:

  • Audio capture and playback.
  • Audio processing, chiefly the so-called 3A processing of the captured audio: AEC (Acoustic Echo Cancellation), ANS (Automatic Noise Suppression), and AGC (Automatic Gain Control).
  • Audio effects, such as voice changing, reverberation, and equalization.
  • Audio encoding and decoding, with codecs such as AAC and Opus, plus handling of weak networks, e.g. NetEQ.
  • Network transport; encoded audio is usually carried over RTP/RTCP.
  • Assembling the whole audio processing pipeline.

The overall WebRTC audio processing pipeline is shown below:

[Figure: WebRTC audio processing pipeline]

Except for audio effects, the WebRTC audio pipeline covers all of the parts above: audio capture and playback, audio processing, audio encoding and decoding, and network transport.

In WebRTC, audio capture and playback are done through the AudioDeviceModule. Each operating system has its own way of talking to audio devices, so each platform implements its own AudioDeviceModule on top of a platform-specific solution. Some platforms even offer several audio stacks: Linux has PulseAudio and ALSA; Android has the Java APIs provided by the framework, OpenSL ES, and AAudio; Windows likewise has multiple options.

The WebRTC audio pipeline only processes audio in 10 ms chunks. Some platforms provide interfaces that capture and play audio in 10 ms blocks, such as Linux; others, such as Android and iOS, do not. Whatever the platform, the data the AudioDeviceModule captures or plays always moves through the AudioDeviceBuffer as 10 ms frames. On platforms that cannot natively capture or play 10 ms blocks, a FineAudioBuffer is inserted between the platform AudioDeviceModule and the AudioDeviceBuffer to convert between the platform's audio block size and the 10 ms frames WebRTC can process.
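
To make the repackaging concrete, here is a minimal sketch of the idea, using nothing beyond the standard library. The class name TenMsRepacker and its interface are hypothetical, for illustration only; the real FineAudioBuffer differs in detail:

#include <cstdint>
#include <deque>
#include <vector>

class TenMsRepacker {
 public:
  TenMsRepacker(int sample_rate_hz, size_t num_channels)
      : frame_size_(sample_rate_hz / 100 * num_channels) {}

  // Append whatever the platform delivered, e.g. 192 samples per callback.
  void Push(const int16_t* data, size_t size) {
    buffer_.insert(buffer_.end(), data, data + size);
  }

  // Pop exactly one 10 ms frame once enough data has accumulated.
  bool Pop10Ms(std::vector<int16_t>* frame) {
    if (buffer_.size() < frame_size_)
      return false;
    frame->assign(buffer_.begin(), buffer_.begin() + frame_size_);
    buffer_.erase(buffer_.begin(), buffer_.begin() + frame_size_);
    return true;
  }

 private:
  const size_t frame_size_;  // Samples per 10 ms across all channels.
  std::deque<int16_t> buffer_;
};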

The AudioDeviceModule connects to a module called AudioTransport. On the capture/send path, AudioTransport performs the audio processing, chiefly the 3A processing. On the playback path sits a mixer, which mixes the multiple received audio streams into one. Echo cancellation removes the played-out sound from the captured signal, so when audio data is pulled from AudioTransport for playback, a copy of that data is also fed into the APM (AudioProcessing Module) as the echo canceller's reference signal.
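
A conceptual sketch of that feedback path, assuming the AudioProcessing interface of the same WebRTC vintage (ProcessStream()/ProcessReverseStream() taking an AudioFrame*); all APM configuration is omitted:

#include "modules/audio_processing/include/audio_processing.h"

// Sketch only: render (playout) audio is fed to the APM as the "reverse
// stream" so AEC knows what the loudspeaker played; capture audio is then
// processed against it.
void OnPlayoutPulled10Ms(webrtc::AudioProcessing* apm,
                         webrtc::AudioFrame* render_frame) {
  apm->ProcessReverseStream(render_frame);  // Reference signal for AEC.
}

void OnCaptured10Ms(webrtc::AudioProcessing* apm,
                    webrtc::AudioFrame* capture_frame) {
  apm->ProcessStream(capture_frame);  // 3A: AEC + ANS + AGC run in here.
}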

AudioTransport connects to AudioSendStream and AudioReceiveStream, which take care of audio encoding and sending, receiving and decoding, and network transport.

BASIC AUDIO OPERATIONS IN WEBRTC

In the WebRTC audio pipeline, no matter how many audio streams the remote side sends, and whatever each stream's sample rate and channel count happen to be, the streams must go through resampling, channel conversion, and mixing, and end up as a single stream at the sample rate and channel count the output device accepts. Concretely, each stream is first resampled and channel-converted to a common sample rate and channel count and then mixed; the mix is then resampled and channel-converted again into the format the device accepts. For example, one stream at 16 kHz mono and another at 44.1 kHz stereo might both be converted to 48 kHz stereo, mixed, and then converted to whatever the playout device wants. (Throughout the WebRTC audio pipeline, samples are represented as 16-bit integers.) Schematically:

[Figure: Mixing]

WebRTC provides a number of utility classes and functions for these operations.

HOW IS MIXING DONE?

WebRTC abstracts the mixer behind the AudioMixer interface, defined as follows (in webrtc/src/api/audio/audio_mixer.h):

 
namespace webrtc {

// WORK IN PROGRESS
// This class is under development and is not yet intended for use outside
// of WebRtc/Libjingle.
class AudioMixer : public rtc::RefCountInterface {
 public:
  // A callback class that all mixer participants must inherit from/implement.
  class Source {
   public:
    enum class AudioFrameInfo {
      kNormal,  // The samples in audio_frame are valid and should be used.
      kMuted,   // The samples in audio_frame should not be used, but
                // should be implicitly interpreted as zero. Other
                // fields in audio_frame may be read and should
                // contain meaningful values.
      kError,   // The audio_frame will not be used.
    };

    // Overwrites |audio_frame|. The data_ field is overwritten with
    // 10 ms of new audio (either 1 or 2 interleaved channels) at
    // |sample_rate_hz|. All fields in |audio_frame| must be updated.
    virtual AudioFrameInfo GetAudioFrameWithInfo(int sample_rate_hz,
                                                 AudioFrame* audio_frame) = 0;

    // A way for a mixer implementation to distinguish participants.
    virtual int Ssrc() const = 0;

    // A way for this source to say that GetAudioFrameWithInfo called
    // with this sample rate or higher will not cause quality loss.
    virtual int PreferredSampleRate() const = 0;

    virtual ~Source() {}
  };

  // Returns true if adding was successful. A source is never added
  // twice. Addition and removal can happen on different threads.
  virtual bool AddSource(Source* audio_source) = 0;

  // Removal is never attempted if a source has not been successfully
  // added to the mixer.
  virtual void RemoveSource(Source* audio_source) = 0;

  // Performs mixing by asking registered audio sources for audio. The
  // mixed result is placed in the provided AudioFrame. This method
  // will only be called from a single thread. The channels argument
  // specifies the number of channels of the mix result. The mixer
  // should mix at a rate that doesn't cause quality loss of the
  // sources' audio. The mixing rate is one of the rates listed in
  // AudioProcessing::NativeRate. All fields in
  // |audio_frame_for_mixing| must be updated.
  virtual void Mix(size_t number_of_channels,
                   AudioFrame* audio_frame_for_mixing) = 0;

 protected:
  // Since the mixer is reference counted, the destructor may be
  // called from any thread.
  ~AudioMixer() override {}
};
}  // namespace webrtc

WebRTC's AudioMixer mixes zero, one, or more Mixer Sources into a single audio frame with a requested number of channels. The sample rate of the output frame is decided by the concrete AudioMixer implementation according to its own rules.

A Mixer Source supplies the AudioMixer with mono or stereo frames at the sample rate the mixer asks for; it is responsible for resampling whatever audio it holds to that rate. It can also report the output sample rate it would prefer, which helps the AudioMixer pick a suitable output rate, and it identifies its stream through Ssrc().
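
As an illustration, a minimal Mixer Source can be sketched directly against the interface above. SineSource is hypothetical: it synthesizes a 440 Hz mono tone at whatever rate the mixer requests, so it needs no resampling; a real source would resample its buffered audio instead, and must update all AudioFrame fields as the interface comment demands:

#include <cmath>
#include "api/audio/audio_mixer.h"

class SineSource : public webrtc::AudioMixer::Source {
 public:
  explicit SineSource(int ssrc) : ssrc_(ssrc) {}

  AudioFrameInfo GetAudioFrameWithInfo(
      int sample_rate_hz, webrtc::AudioFrame* audio_frame) override {
    constexpr double kPi = 3.14159265358979323846;
    const size_t samples = sample_rate_hz / 100;  // 10 ms of mono audio.
    audio_frame->sample_rate_hz_ = sample_rate_hz;
    audio_frame->samples_per_channel_ = samples;
    audio_frame->num_channels_ = 1;
    audio_frame->vad_activity_ = webrtc::AudioFrame::kVadActive;
    int16_t* data = audio_frame->mutable_data();
    for (size_t i = 0; i < samples; ++i) {
      data[i] = static_cast<int16_t>(
          8000.0 * std::sin(2.0 * kPi * 440.0 * (phase_ + i) / sample_rate_hz));
    }
    phase_ += samples;
    return AudioFrameInfo::kNormal;
  }

  int Ssrc() const override { return ssrc_; }

  // Generated audio is fine at any rate; preferring 48 kHz avoids
  // upsampling elsewhere in the pipeline.
  int PreferredSampleRate() const override { return 48000; }

 private:
  const int ssrc_;
  size_t phase_ = 0;  // Running sample index, keeps the tone continuous.
};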

WebRTC ships one AudioMixer implementation, the AudioMixerImpl class, under webrtc/src/modules/audio_mixer/. The class is defined as follows (in webrtc/src/modules/audio_mixer/audio_mixer_impl.h):

 
namespace webrtc {

typedef std::vector<AudioFrame*> AudioFrameList;

class AudioMixerImpl : public AudioMixer {
 public:
  struct SourceStatus {
    SourceStatus(Source* audio_source, bool is_mixed, float gain)
        : audio_source(audio_source), is_mixed(is_mixed), gain(gain) {}
    Source* audio_source = nullptr;
    bool is_mixed = false;
    float gain = 0.0f;

    // A frame that will be passed to audio_source->GetAudioFrameWithInfo.
    AudioFrame audio_frame;
  };

  using SourceStatusList = std::vector<std::unique_ptr<SourceStatus>>;

  // AudioProcessing only accepts 10 ms frames.
  static const int kFrameDurationInMs = 10;
  static const int kMaximumAmountOfMixedAudioSources = 3;

  static rtc::scoped_refptr<AudioMixerImpl> Create();

  static rtc::scoped_refptr<AudioMixerImpl> Create(
      std::unique_ptr<OutputRateCalculator> output_rate_calculator,
      bool use_limiter);

  ~AudioMixerImpl() override;

  // AudioMixer functions
  bool AddSource(Source* audio_source) override;
  void RemoveSource(Source* audio_source) override;
  void Mix(size_t number_of_channels,
           AudioFrame* audio_frame_for_mixing) override
      RTC_LOCKS_EXCLUDED(crit_);

  // Returns true if the source was mixed last round. Returns
  // false and logs an error if the source was never added to the
  // mixer.
  bool GetAudioSourceMixabilityStatusForTest(Source* audio_source) const;

 protected:
  AudioMixerImpl(std::unique_ptr<OutputRateCalculator> output_rate_calculator,
                 bool use_limiter);

 private:
  // Set mixing frequency through OutputFrequencyCalculator.
  void CalculateOutputFrequency();
  // Get mixing frequency.
  int OutputFrequency() const;

  // Compute what audio sources to mix from audio_source_list_. Ramp
  // in and out. Update mixed status. Mixes up to
  // kMaximumAmountOfMixedAudioSources audio sources.
  AudioFrameList GetAudioFromSources() RTC_EXCLUSIVE_LOCKS_REQUIRED(crit_);

  // The critical section lock guards audio source insertion and
  // removal, which can be done from any thread. The race checker
  // checks that mixing is done sequentially.
  rtc::CriticalSection crit_;
  rtc::RaceChecker race_checker_;

  std::unique_ptr<OutputRateCalculator> output_rate_calculator_;
  // The current sample frequency and sample size when mixing.
  int output_frequency_ RTC_GUARDED_BY(race_checker_);
  size_t sample_size_ RTC_GUARDED_BY(race_checker_);

  // List of all audio sources. Note all lists are disjunct
  SourceStatusList audio_source_list_ RTC_GUARDED_BY(crit_);  // May be mixed.

  // Component that handles actual adding of audio frames.
  FrameCombiner frame_combiner_ RTC_GUARDED_BY(race_checker_);

  RTC_DISALLOW_COPY_AND_ASSIGN(AudioMixerImpl);
};
}  // namespace webrtc

The implementation of AudioMixerImpl (in webrtc/src/modules/audio_mixer/audio_mixer_impl.cc) is as follows:

 
namespace webrtc {
namespace {

struct SourceFrame {
  SourceFrame(AudioMixerImpl::SourceStatus* source_status,
              AudioFrame* audio_frame,
              bool muted)
      : source_status(source_status), audio_frame(audio_frame), muted(muted) {
    RTC_DCHECK(source_status);
    RTC_DCHECK(audio_frame);
    if (!muted) {
      energy = AudioMixerCalculateEnergy(*audio_frame);
    }
  }

  SourceFrame(AudioMixerImpl::SourceStatus* source_status,
              AudioFrame* audio_frame,
              bool muted,
              uint32_t energy)
      : source_status(source_status),
        audio_frame(audio_frame),
        muted(muted),
        energy(energy) {
    RTC_DCHECK(source_status);
    RTC_DCHECK(audio_frame);
  }

  AudioMixerImpl::SourceStatus* source_status = nullptr;
  AudioFrame* audio_frame = nullptr;
  bool muted = true;
  uint32_t energy = 0;
};

// ShouldMixBefore(a, b) is used to select mixer sources.
bool ShouldMixBefore(const SourceFrame& a, const SourceFrame& b) {
  if (a.muted != b.muted) {
    return b.muted;
  }

  const auto a_activity = a.audio_frame->vad_activity_;
  const auto b_activity = b.audio_frame->vad_activity_;

  if (a_activity != b_activity) {
    return a_activity == AudioFrame::kVadActive;
  }

  return a.energy > b.energy;
}

void RampAndUpdateGain(
    const std::vector<SourceFrame>& mixed_sources_and_frames) {
  for (const auto& source_frame : mixed_sources_and_frames) {
    float target_gain = source_frame.source_status->is_mixed ? 1.0f : 0.0f;
    Ramp(source_frame.source_status->gain, target_gain,
         source_frame.audio_frame);
    source_frame.source_status->gain = target_gain;
  }
}

AudioMixerImpl::SourceStatusList::const_iterator FindSourceInList(
    AudioMixerImpl::Source const* audio_source,
    AudioMixerImpl::SourceStatusList const* audio_source_list) {
  return std::find_if(
      audio_source_list->begin(), audio_source_list->end(),
      [audio_source](const std::unique_ptr<AudioMixerImpl::SourceStatus>& p) {
        return p->audio_source == audio_source;
      });
}
}  // namespace

AudioMixerImpl::AudioMixerImpl(
    std::unique_ptr<OutputRateCalculator> output_rate_calculator,
    bool use_limiter)
    : output_rate_calculator_(std::move(output_rate_calculator)),
      output_frequency_(0),
      sample_size_(0),
      audio_source_list_(),
      frame_combiner_(use_limiter) {}

AudioMixerImpl::~AudioMixerImpl() {}

rtc::scoped_refptr<AudioMixerImpl> AudioMixerImpl::Create() {
  return Create(std::unique_ptr<DefaultOutputRateCalculator>(
                    new DefaultOutputRateCalculator()),
                true);
}

rtc::scoped_refptr<AudioMixerImpl> AudioMixerImpl::Create(
    std::unique_ptr<OutputRateCalculator> output_rate_calculator,
    bool use_limiter) {
  return rtc::scoped_refptr<AudioMixerImpl>(
      new rtc::RefCountedObject<AudioMixerImpl>(
          std::move(output_rate_calculator), use_limiter));
}

void AudioMixerImpl::Mix(size_t number_of_channels,
                         AudioFrame* audio_frame_for_mixing) {
  RTC_DCHECK(number_of_channels >= 1);
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);

  CalculateOutputFrequency();

  {
    rtc::CritScope lock(&crit_);
    const size_t number_of_streams = audio_source_list_.size();
    frame_combiner_.Combine(GetAudioFromSources(), number_of_channels,
                            OutputFrequency(), number_of_streams,
                            audio_frame_for_mixing);
  }

  return;
}

void AudioMixerImpl::CalculateOutputFrequency() {
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);
  rtc::CritScope lock(&crit_);

  std::vector<int> preferred_rates;
  std::transform(audio_source_list_.begin(), audio_source_list_.end(),
                 std::back_inserter(preferred_rates),
                 [&](std::unique_ptr<SourceStatus>& a) {
                   return a->audio_source->PreferredSampleRate();
                 });

  output_frequency_ =
      output_rate_calculator_->CalculateOutputRate(preferred_rates);
  sample_size_ = (output_frequency_ * kFrameDurationInMs) / 1000;
}

int AudioMixerImpl::OutputFrequency() const {
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);
  return output_frequency_;
}

bool AudioMixerImpl::AddSource(Source* audio_source) {
  RTC_DCHECK(audio_source);
  rtc::CritScope lock(&crit_);
  RTC_DCHECK(FindSourceInList(audio_source, &audio_source_list_) ==
             audio_source_list_.end())
      << "Source already added to mixer";
  audio_source_list_.emplace_back(new SourceStatus(audio_source, false, 0));
  return true;
}

void AudioMixerImpl::RemoveSource(Source* audio_source) {
  RTC_DCHECK(audio_source);
  rtc::CritScope lock(&crit_);
  const auto iter = FindSourceInList(audio_source, &audio_source_list_);
  RTC_DCHECK(iter != audio_source_list_.end()) << "Source not present in mixer";
  audio_source_list_.erase(iter);
}

AudioFrameList AudioMixerImpl::GetAudioFromSources() {
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);
  AudioFrameList result;
  std::vector<SourceFrame> audio_source_mixing_data_list;
  std::vector<SourceFrame> ramp_list;

  // Get audio from the audio sources and put it in the SourceFrame vector.
  for (auto& source_and_status : audio_source_list_) {
    const auto audio_frame_info =
        source_and_status->audio_source->GetAudioFrameWithInfo(
            OutputFrequency(), &source_and_status->audio_frame);

    if (audio_frame_info == Source::AudioFrameInfo::kError) {
      RTC_LOG_F(LS_WARNING) << "failed to GetAudioFrameWithInfo() from source";
      continue;
    }
    audio_source_mixing_data_list.emplace_back(
        source_and_status.get(), &source_and_status->audio_frame,
        audio_frame_info == Source::AudioFrameInfo::kMuted);
  }

  // Sort frames by sorting function.
  std::sort(audio_source_mixing_data_list.begin(),
            audio_source_mixing_data_list.end(), ShouldMixBefore);

  int max_audio_frame_counter = kMaximumAmountOfMixedAudioSources;

  // Go through list in order and put unmuted frames in result list.
  for (const auto& p : audio_source_mixing_data_list) {
    // Filter muted.
    if (p.muted) {
      p.source_status->is_mixed = false;
      continue;
    }

    // Add frame to result vector for mixing.
    bool is_mixed = false;
    if (max_audio_frame_counter > 0) {
      --max_audio_frame_counter;
      result.push_back(p.audio_frame);
      ramp_list.emplace_back(p.source_status, p.audio_frame, false, -1);
      is_mixed = true;
    }
    p.source_status->is_mixed = is_mixed;
  }

  RampAndUpdateGain(ramp_list);
  return result;
}

bool AudioMixerImpl::GetAudioSourceMixabilityStatusForTest(
    AudioMixerImpl::Source* audio_source) const {
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);
  rtc::CritScope lock(&crit_);

  const auto iter = FindSourceInList(audio_source, &audio_source_list_);
  if (iter != audio_source_list_.end()) {
    return (*iter)->is_mixed;
  }

  RTC_LOG(LS_ERROR) << "Audio source unknown";
  return false;
}
}  // namespace webrtc

It is easy to see that AudioMixerImpl's AddSource(Source* audio_source) and RemoveSource(Source* audio_source) are ordinary container operations, except that adding a source twice and removing a source that was never added are both forbidden. The heart of the class is clearly Mix(size_t number_of_channels, AudioFrame* audio_frame_for_mixing).
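
Before looking inside Mix(), a minimal driving sketch (not from the WebRTC sources, using the hypothetical SineSource from earlier) shows how these calls fit together; Mix() itself is re-quoted right after:

// Assumes: #include "modules/audio_mixer/audio_mixer_impl.h" and the
// SineSource sketch from earlier in this article.
void MixOneSecondOfAudio() {
  rtc::scoped_refptr<webrtc::AudioMixerImpl> mixer =
      webrtc::AudioMixerImpl::Create();

  SineSource source_a(/*ssrc=*/111);  // Hypothetical sources.
  SineSource source_b(/*ssrc=*/222);
  mixer->AddSource(&source_a);
  mixer->AddSource(&source_b);

  webrtc::AudioFrame mixed_frame;
  for (int i = 0; i < 100; ++i) {  // 100 frames of 10 ms = one second.
    // Each call produces one 10 ms stereo frame; the output sample rate is
    // chosen internally from the sources' PreferredSampleRate() values.
    mixer->Mix(/*number_of_channels=*/2, &mixed_frame);
    // ... hand mixed_frame.data() to the device or the next pipeline stage ...
  }

  mixer->RemoveSource(&source_a);
  mixer->RemoveSource(&source_b);
}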

 
void AudioMixerImpl::Mix(size_t number_of_channels,
                         AudioFrame* audio_frame_for_mixing) {
  RTC_DCHECK(number_of_channels >= 1);
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);

  CalculateOutputFrequency();

  {
    rtc::CritScope lock(&crit_);
    const size_t number_of_streams = audio_source_list_.size();
    frame_combiner_.Combine(GetAudioFromSources(), number_of_channels,
                            OutputFrequency(), number_of_streams,
                            audio_frame_for_mixing);
  }

  return;
}

The mixing process of AudioMixerImpl::Mix() runs roughly like this:

  1. Compute the sample rate of the output frame. This is why the interface takes no output sample rate: the AudioMixer implementation computes one internally, normally from the preferred sample rates of its Mixer Sources.
  2. Obtain a list of audio frames at that rate from the Mixer Sources. The AudioMixer does not simply grab one frame per source and build a list; it also applies some simple transformations to, and makes a selection among, those frames.
  3. Mix the resulting frames through the FrameCombiner.

COMPUTING THE OUTPUT SAMPLE RATE

The output sample rate is computed as follows:

 
void AudioMixerImpl::CalculateOutputFrequency() {
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);
  rtc::CritScope lock(&crit_);

  std::vector<int> preferred_rates;
  std::transform(audio_source_list_.begin(), audio_source_list_.end(),
                 std::back_inserter(preferred_rates),
                 [&](std::unique_ptr<SourceStatus>& a) {
                   return a->audio_source->PreferredSampleRate();
                 });

  output_frequency_ =
      output_rate_calculator_->CalculateOutputRate(preferred_rates);
  sample_size_ = (output_frequency_ * kFrameDurationInMs) / 1000;
}

AudioMixerImpl first collects each Mixer Source's preferred sample rate into a list, then computes the output rate through the OutputRateCalculator interface (in webrtc/modules/audio_mixer/output_rate_calculator.h):

 
class OutputRateCalculator {
 public:
  virtual int CalculateOutputRate(
      const std::vector<int>& preferred_sample_rates) = 0;
  virtual ~OutputRateCalculator() {}
};

WebRTC provides a default implementation of this interface, DefaultOutputRateCalculator, whose class definition (webrtc/src/modules/audio_mixer/default_output_rate_calculator.h) is:

 
namespace webrtc {

class DefaultOutputRateCalculator : public OutputRateCalculator {
 public:
  static const int kDefaultFrequency = 48000;

  // Produces the least native rate greater or equal to the preferred
  // sample rates. A native rate is one in
  // AudioProcessing::NativeRate. If |preferred_sample_rates| is
  // empty, returns |kDefaultFrequency|.
  int CalculateOutputRate(
      const std::vector<int>& preferred_sample_rates) override;
  ~DefaultOutputRateCalculator() override {}
};

}  // namespace webrtc

The class definition is simple. The default output sample rate of the AudioMixer is computed like this:

 
namespace webrtc {

int DefaultOutputRateCalculator::CalculateOutputRate(
    const std::vector<int>& preferred_sample_rates) {
  if (preferred_sample_rates.empty()) {
    return DefaultOutputRateCalculator::kDefaultFrequency;
  }
  using NativeRate = AudioProcessing::NativeRate;
  const int maximal_frequency = *std::max_element(
      preferred_sample_rates.begin(), preferred_sample_rates.end());

  RTC_DCHECK_LE(NativeRate::kSampleRate8kHz, maximal_frequency);
  RTC_DCHECK_GE(NativeRate::kSampleRate48kHz, maximal_frequency);

  static constexpr NativeRate native_rates[] = {
      NativeRate::kSampleRate8kHz, NativeRate::kSampleRate16kHz,
      NativeRate::kSampleRate32kHz, NativeRate::kSampleRate48kHz};
  const auto* rounded_up_index = std::lower_bound(
      std::begin(native_rates), std::end(native_rates), maximal_frequency);
  RTC_DCHECK(rounded_up_index != std::end(native_rates));
  return *rounded_up_index;
}

}  // namespace webrtc

For audio, WebRTC internally supports a set of native sample rates: 8 kHz, 16 kHz, 32 kHz, and 48 kHz. DefaultOutputRateCalculator takes the maximum of the preferred rates passed in, then picks the smallest native rate greater than or equal to that maximum. Consequently, if every Mixer Source of an AudioMixerImpl prefers a rate above 48 kHz, the computation fails (the DCHECKs above fire).
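
The rounding can be reproduced standalone. A small sketch mirroring the function above with plain ints: preferred rates {16000, 44100} have maximum 44100, and std::lower_bound over the sorted native-rate table returns the first entry >= 44100, namely 48000:

#include <algorithm>
#include <cassert>
#include <vector>

// Standalone illustration of DefaultOutputRateCalculator's rounding.
int RoundUpToNativeRate(const std::vector<int>& preferred_sample_rates) {
  static const int kNativeRates[] = {8000, 16000, 32000, 48000};
  if (preferred_sample_rates.empty()) return 48000;
  const int max_rate = *std::max_element(preferred_sample_rates.begin(),
                                         preferred_sample_rates.end());
  const int* it = std::lower_bound(std::begin(kNativeRates),
                                   std::end(kNativeRates), max_rate);
  assert(it != std::end(kNativeRates));  // Fails if max_rate > 48000.
  return *it;
}

// RoundUpToNativeRate({16000, 44100}) == 48000
// RoundUpToNativeRate({8000, 8000})   == 8000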

OBTAINING THE LIST OF AUDIO FRAMES

AudioMixerImpl::GetAudioFromSources() obtains the list of audio frames:

 
AudioFrameList AudioMixerImpl::GetAudioFromSources() {
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);
  AudioFrameList result;
  std::vector<SourceFrame> audio_source_mixing_data_list;
  std::vector<SourceFrame> ramp_list;

  // Get audio from the audio sources and put it in the SourceFrame vector.
  for (auto& source_and_status : audio_source_list_) {
    const auto audio_frame_info =
        source_and_status->audio_source->GetAudioFrameWithInfo(
            OutputFrequency(), &source_and_status->audio_frame);

    if (audio_frame_info == Source::AudioFrameInfo::kError) {
      RTC_LOG_F(LS_WARNING) << "failed to GetAudioFrameWithInfo() from source";
      continue;
    }
    audio_source_mixing_data_list.emplace_back(
        source_and_status.get(), &source_and_status->audio_frame,
        audio_frame_info == Source::AudioFrameInfo::kMuted);
  }

  // Sort frames by sorting function.
  std::sort(audio_source_mixing_data_list.begin(),
            audio_source_mixing_data_list.end(), ShouldMixBefore);

  int max_audio_frame_counter = kMaximumAmountOfMixedAudioSources;

  // Go through list in order and put unmuted frames in result list.
  for (const auto& p : audio_source_mixing_data_list) {
    // Filter muted.
    if (p.muted) {
      p.source_status->is_mixed = false;
      continue;
    }

    // Add frame to result vector for mixing.
    bool is_mixed = false;
    if (max_audio_frame_counter > 0) {
      --max_audio_frame_counter;
      result.push_back(p.audio_frame);
      ramp_list.emplace_back(p.source_status, p.audio_frame, false, -1);
      is_mixed = true;
    }
    p.source_status->is_mixed = is_mixed;
  }

  RampAndUpdateGain(ramp_list);
  return result;
}

  1. AudioMixerImpl::GetAudioFromSources() obtains an audio frame from each Mixer Source and builds a list of SourceFrames. Note that the SourceFrame constructor calls AudioMixerCalculateEnergy() (in webrtc/src/modules/audio_mixer/audio_frame_manipulator.cc) to compute the frame's energy, which works as follows:
 
uint32_t AudioMixerCalculateEnergy(const AudioFrame& audio_frame) {
  if (audio_frame.muted()) {
    return 0;
  }

  uint32_t energy = 0;
  const int16_t* frame_data = audio_frame.data();
  for (size_t position = 0;
       position < audio_frame.samples_per_channel_ * audio_frame.num_channels_;
       position++) {
    // TODO(aleloi): This can overflow. Convert to floats.
    energy += frame_data[position] * frame_data[position];
  }
  return energy;
}

The energy is simply the sum of the squares of all sample values. (For a 10 ms mono frame at 48 kHz that is 480 squared samples; as the TODO in the code notes, the sum can overflow.)

  2. The obtained frames are then sorted. The ordering logic is:
 
bool ShouldMixBefore(const SourceFrame& a, const SourceFrame& b) {
  if (a.muted != b.muted) {
    return b.muted;
  }

  const auto a_activity = a.audio_frame->vad_activity_;
  const auto b_activity = b.audio_frame->vad_activity_;

  if (a_activity != b_activity) {
    return a_activity == AudioFrame::kVadActive;
  }

  return a.energy > b.energy;
}

  3. From the sorted list, at most 3 frames, those with the strongest signals, are selected for the result.

  4. The selected frames are ramped and their gains updated:

 
void RampAndUpdateGain(
    const std::vector<SourceFrame>& mixed_sources_and_frames) {
  for (const auto& source_frame : mixed_sources_and_frames) {
    float target_gain = source_frame.source_status->is_mixed ? 1.0f : 0.0f;
    Ramp(source_frame.source_status->gain, target_gain,
         source_frame.audio_frame);
    source_frame.source_status->gain = target_gain;
  }
}

Ramp() (in webrtc/src/modules/audio_mixer/audio_frame_manipulator.cc) runs as follows:

 
void Ramp(float start_gain, float target_gain, AudioFrame* audio_frame) {
  RTC_DCHECK(audio_frame);
  RTC_DCHECK_GE(start_gain, 0.0f);
  RTC_DCHECK_GE(target_gain, 0.0f);
  if (start_gain == target_gain || audio_frame->muted()) {
    return;
  }

  size_t samples = audio_frame->samples_per_channel_;
  RTC_DCHECK_LT(0, samples);
  float increment = (target_gain - start_gain) / samples;
  float gain = start_gain;
  int16_t* frame_data = audio_frame->mutable_data();
  for (size_t i = 0; i < samples; ++i) {
    // If the audio is interleaved of several channels, we want to
    // apply the same gain change to the ith sample of every channel.
    for (size_t ch = 0; ch < audio_frame->num_channels_; ++ch) {
      frame_data[audio_frame->num_channels_ * i + ch] *= gain;
    }
    gain += increment;
  }
}

This step exists because, from one mix interval to the next, the same stream may be included in or dropped from the mix depending on the relative strength of its frames. Ramping the gain across the frame, rather than switching it abruptly between 0 and 1, lets that stream fade in or out smoothly. For example, a freshly selected source ramps from gain 0.0 to 1.0 over one 10 ms frame: at 48 kHz the increment is 1/480, so sample i is scaled by roughly i/480, avoiding an audible click at the frame boundary.

FRAMECOMBINER

FrameCombiner is where the mixing is finally carried out:

 
void FrameCombiner::Combine(const std::vector<AudioFrame*>& mix_list,
                            size_t number_of_channels,
                            int sample_rate,
                            size_t number_of_streams,
                            AudioFrame* audio_frame_for_mixing) {
  RTC_DCHECK(audio_frame_for_mixing);
  LogMixingStats(mix_list, sample_rate, number_of_streams);

  SetAudioFrameFields(mix_list, number_of_channels, sample_rate,
                      number_of_streams, audio_frame_for_mixing);

  const size_t samples_per_channel = static_cast<size_t>(
      (sample_rate * webrtc::AudioMixerImpl::kFrameDurationInMs) / 1000);

  for (const auto* frame : mix_list) {
    RTC_DCHECK_EQ(samples_per_channel, frame->samples_per_channel_);
    RTC_DCHECK_EQ(sample_rate, frame->sample_rate_hz_);
  }

  // The 'num_channels_' field of frames in 'mix_list' could be
  // different from 'number_of_channels'.
  for (auto* frame : mix_list) {
    RemixFrame(number_of_channels, frame);
  }

  if (number_of_streams <= 1) {
    MixFewFramesWithNoLimiter(mix_list, audio_frame_for_mixing);
    return;
  }

  std::array<OneChannelBuffer, kMaximumAmountOfChannels> mixing_buffer =
      MixToFloatFrame(mix_list, samples_per_channel, number_of_channels);

  // Put float data in an AudioFrameView.
  std::array<float*, kMaximumAmountOfChannels> channel_pointers{};
  for (size_t i = 0; i < number_of_channels; ++i) {
    channel_pointers[i] = &mixing_buffer[i][0];
  }
  AudioFrameView<float> mixing_buffer_view(
      &channel_pointers[0], number_of_channels, samples_per_channel);

  if (use_limiter_) {
    RunLimiter(mixing_buffer_view, &limiter_);
  }

  InterleaveToAudioFrame(mixing_buffer_view, audio_frame_for_mixing);
}

  1. FrameCombiner first converts every frame's channel count to the target channel count:
 
void RemixFrame(size_t target_number_of_channels, AudioFrame* frame) {
  RTC_DCHECK_GE(target_number_of_channels, 1);
  RTC_DCHECK_LE(target_number_of_channels, 2);
  if (frame->num_channels_ == 1 && target_number_of_channels == 2) {
    AudioFrameOperations::MonoToStereo(frame);
  } else if (frame->num_channels_ == 2 && target_number_of_channels == 1) {
    AudioFrameOperations::StereoToMono(frame);
  }
}

  2. It then performs the mix:
 
std::array<OneChannelBuffer, kMaximumAmountOfChannels> MixToFloatFrame(
    const std::vector<AudioFrame*>& mix_list,
    size_t samples_per_channel,
    size_t number_of_channels) {
  // Convert to FloatS16 and mix.
  using OneChannelBuffer = std::array<float, kMaximumChannelSize>;
  std::array<OneChannelBuffer, kMaximumAmountOfChannels> mixing_buffer{};

  for (size_t i = 0; i < mix_list.size(); ++i) {
    const AudioFrame* const frame = mix_list[i];
    for (size_t j = 0; j < number_of_channels; ++j) {
      for (size_t k = 0; k < samples_per_channel; ++k) {
        mixing_buffer[j][k] += frame->data()[number_of_channels * k + j];
      }
    }
  }
  return mixing_buffer;
}

As the code shows, "mixing" is nothing more than adding up, channel by channel, the sample values of the different streams' frames in a float buffer.
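
A standalone sketch of that idea (not WebRTC code): mixing N int16 streams is per-sample summation in a wider type, then saturation back to int16. WebRTC instead accumulates in float buffers and runs a limiter, but the core operation is the same:

#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<int16_t> MixNaive(const std::vector<std::vector<int16_t>>& streams) {
  if (streams.empty()) return {};
  const size_t n = streams[0].size();
  std::vector<int16_t> out(n);
  for (size_t i = 0; i < n; ++i) {
    int32_t acc = 0;  // Wider accumulator avoids int16 wrap-around.
    for (const auto& s : streams) acc += s[i];
    // Hard saturation back into int16 range.
    acc = std::min<int32_t>(32767, std::max<int32_t>(-32768, acc));
    out[i] = static_cast<int16_t>(acc);
  }
  return out;
}

The hard clipping in the last step is the crude version of what FrameCombiner's limiter, described next, does more gracefully.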

  3. RunLimiter: this step runs the mixed signal through AGC machinery, a FixedGainController acting as a limiter:
 
void RunLimiter(AudioFrameView<float> mixing_buffer_view,
                FixedGainController* limiter) {
  const size_t sample_rate = mixing_buffer_view.samples_per_channel() * 1000 /
                             AudioMixerImpl::kFrameDurationInMs;
  limiter->SetSampleRate(sample_rate);
  limiter->Process(mixing_buffer_view);
}

  4. Data format conversion:
 
// Both interleaves and rounds.
void InterleaveToAudioFrame(AudioFrameView<const float> mixing_buffer_view,
                            AudioFrame* audio_frame_for_mixing) {
  const size_t number_of_channels = mixing_buffer_view.num_channels();
  const size_t samples_per_channel = mixing_buffer_view.samples_per_channel();
  // Put data in the result frame.
  for (size_t i = 0; i < number_of_channels; ++i) {
    for (size_t j = 0; j < samples_per_channel; ++j) {
      audio_frame_for_mixing->mutable_data()[number_of_channels * j + i] =
          FloatS16ToS16(mixing_buffer_view.channel(i)[j]);
    }
  }
}

The preceding steps produced floating-point sample data. This step converts the float data into the required 16-bit integer format, interleaving the channels as it goes.
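
FloatS16ToS16() comes from webrtc/src/common_audio/include/audio_util.h. Conceptually it clamps a float sample in S16 range and rounds it to the nearest integer; a paraphrased sketch (my own, the library code may differ in detail):

#include <cstdint>

// Paraphrased sketch of the FloatS16 -> int16 conversion: clamp to the
// int16 range, then round to the nearest integer.
int16_t FloatS16ToS16Sketch(float v) {
  if (v >= 32767.0f) return 32767;
  if (v <= -32768.0f) return -32768;
  return static_cast<int16_t>(v + (v >= 0 ? 0.5f : -0.5f));  // Round half away.
}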

At this point the mix is complete.

Conclusion: mixing boils down to summing the sample data of the individual audio streams.

HOW IS CHANNEL CONVERSION DONE?

WebRTC provides utility functions for converting audio frames between mono, stereo, and quad (four channels), in webrtc/audio/utility/audio_frame_operations.cc. Their implementations show exactly what channel conversion means.

Mono to stereo:

 
void AudioFrameOperations::MonoToStereo(const int16_t* src_audio,
                                        size_t samples_per_channel,
                                        int16_t* dst_audio) {
  for (size_t i = 0; i < samples_per_channel; i++) {
    dst_audio[2 * i] = src_audio[i];
    dst_audio[2 * i + 1] = src_audio[i];
  }
}

int AudioFrameOperations::MonoToStereo(AudioFrame* frame) {
  if (frame->num_channels_ != 1) {
    return -1;
  }
  if ((frame->samples_per_channel_ * 2) >= AudioFrame::kMaxDataSizeSamples) {
    // Not enough memory to expand from mono to stereo.
    return -1;
  }

  if (!frame->muted()) {
    // TODO(yujo): this operation can be done in place.
    int16_t data_copy[AudioFrame::kMaxDataSizeSamples];
    memcpy(data_copy, frame->data(),
           sizeof(int16_t) * frame->samples_per_channel_);
    MonoToStereo(data_copy, frame->samples_per_channel_, frame->mutable_data());
  }
  frame->num_channels_ = 2;
  return 0;
}

Mono to stereo duplicates the single channel's data so that both channels play the same audio.

Stereo to mono:

 
void AudioFrameOperations::StereoToMono(const int16_t* src_audio,
                                        size_t samples_per_channel,
                                        int16_t* dst_audio) {
  for (size_t i = 0; i < samples_per_channel; i++) {
    dst_audio[i] =
        (static_cast<int32_t>(src_audio[2 * i]) + src_audio[2 * i + 1]) >> 1;
  }
}

int AudioFrameOperations::StereoToMono(AudioFrame* frame) {
  if (frame->num_channels_ != 2) {
    return -1;
  }

  RTC_DCHECK_LE(frame->samples_per_channel_ * 2,
                AudioFrame::kMaxDataSizeSamples);

  if (!frame->muted()) {
    StereoToMono(frame->data(), frame->samples_per_channel_,
                 frame->mutable_data());
  }
  frame->num_channels_ = 1;
  return 0;
}

Stereo to mono adds the two channels' samples and halves the sum (the >> 1), averaging them into one channel; for example, samples 1000 and 3000 average to 2000.

Quad to stereo:

 
void AudioFrameOperations::QuadToStereo(const int16_t* src_audio,
                                        size_t samples_per_channel,
                                        int16_t* dst_audio) {
  for (size_t i = 0; i < samples_per_channel; i++) {
    dst_audio[i * 2] =
        (static_cast<int32_t>(src_audio[4 * i]) + src_audio[4 * i + 1]) >> 1;
    dst_audio[i * 2 + 1] =
        (static_cast<int32_t>(src_audio[4 * i + 2]) + src_audio[4 * i + 3]) >>
        1;
  }
}

int AudioFrameOperations::QuadToStereo(AudioFrame* frame) {
  if (frame->num_channels_ != 4) {
    return -1;
  }

  RTC_DCHECK_LE(frame->samples_per_channel_ * 4,
                AudioFrame::kMaxDataSizeSamples);

  if (!frame->muted()) {
    QuadToStereo(frame->data(), frame->samples_per_channel_,
                 frame->mutable_data());
  }
  frame->num_channels_ = 2;
  return 0;
}

Quad to stereo averages channels 1 and 2 into one output channel, and channels 3 and 4 into the other.

Quad to mono:

 
void AudioFrameOperations::QuadToMono(const int16_t* src_audio,
                                      size_t samples_per_channel,
                                      int16_t* dst_audio) {
  for (size_t i = 0; i < samples_per_channel; i++) {
    dst_audio[i] =
        (static_cast<int32_t>(src_audio[4 * i]) + src_audio[4 * i + 1] +
         src_audio[4 * i + 2] + src_audio[4 * i + 3]) >>
        2;
  }
}

int AudioFrameOperations::QuadToMono(AudioFrame* frame) {
  if (frame->num_channels_ != 4) {
    return -1;
  }

  RTC_DCHECK_LE(frame->samples_per_channel_ * 4,
                AudioFrame::kMaxDataSizeSamples);

  if (!frame->muted()) {
    QuadToMono(frame->data(), frame->samples_per_channel_,
               frame->mutable_data());
  }
  frame->num_channels_ = 1;
  return 0;
}

Quad to mono averages all four channels' samples into a single channel.

For the other audio frame operations WebRTC provides, see the WebRTC headers.

RESAMPLING

Resampling converts audio data at one sample rate into audio data at another sample rate. In WebRTC it is mainly carried out by the PushResampler, PushSincResampler, and SincResampler components. Take Resample() in webrtc/src/audio/audio_transport_impl.cc as an example:

 
// Resample audio in |frame| to given sample rate preserving the
// channel count and place the result in |destination|.
int Resample(const AudioFrame& frame, const int destination_sample_rate,
             PushResampler<int16_t>* resampler, int16_t* destination) {
  const int number_of_channels = static_cast<int>(frame.num_channels_);
  const int target_number_of_samples_per_channel =
      destination_sample_rate / 100;
  resampler->InitializeIfNeeded(frame.sample_rate_hz_, destination_sample_rate,
                                number_of_channels);

  // TODO(yujo): make resampler take an AudioFrame, and add special case
  // handling of muted frames.
  return resampler->Resample(
      frame.data(), frame.samples_per_channel_ * number_of_channels,
      destination, number_of_channels * target_number_of_samples_per_channel);
}

PushResampler is a template class with a fairly simple interface, defined as follows (in webrtc/src/common_audio/resampler/include/push_resampler.h):

 
namespace webrtc {

class PushSincResampler;

// Wraps PushSincResampler to provide stereo support.
// TODO(ajm): add support for an arbitrary number of channels.
template <typename T>
class PushResampler {
 public:
  PushResampler();
  virtual ~PushResampler();

  // Must be called whenever the parameters change. Free to be called at any
  // time as it is a no-op if parameters have not changed since the last call.
  int InitializeIfNeeded(int src_sample_rate_hz,
                         int dst_sample_rate_hz,
                         size_t num_channels);

  // Returns the total number of samples provided in destination (e.g. 32 kHz,
  // 2 channel audio gives 640 samples).
  int Resample(const T* src, size_t src_length, T* dst, size_t dst_capacity);

 private:
  std::unique_ptr<PushSincResampler> sinc_resampler_;
  std::unique_ptr<PushSincResampler> sinc_resampler_right_;
  int src_sample_rate_hz_;
  int dst_sample_rate_hz_;
  size_t num_channels_;
  std::unique_ptr<T[]> src_left_;
  std::unique_ptr<T[]> src_right_;
  std::unique_ptr<T[]> dst_left_;
  std::unique_ptr<T[]> dst_right_;
};
}  // namespace webrtc

Its implementation (in webrtc/src/common_audio/resampler/push_resampler.cc):

 
template <typename T>
PushResampler<T>::PushResampler()
    : src_sample_rate_hz_(0), dst_sample_rate_hz_(0), num_channels_(0) {}

template <typename T>
PushResampler<T>::~PushResampler() {}

template <typename T>
int PushResampler<T>::InitializeIfNeeded(int src_sample_rate_hz,
                                         int dst_sample_rate_hz,
                                         size_t num_channels) {
  CheckValidInitParams(src_sample_rate_hz, dst_sample_rate_hz, num_channels);

  if (src_sample_rate_hz == src_sample_rate_hz_ &&
      dst_sample_rate_hz == dst_sample_rate_hz_ &&
      num_channels == num_channels_) {
    // No-op if settings haven't changed.
    return 0;
  }

  if (src_sample_rate_hz <= 0 || dst_sample_rate_hz <= 0 || num_channels <= 0 ||
      num_channels > 2) {
    return -1;
  }

  src_sample_rate_hz_ = src_sample_rate_hz;
  dst_sample_rate_hz_ = dst_sample_rate_hz;
  num_channels_ = num_channels;

  const size_t src_size_10ms_mono =
      static_cast<size_t>(src_sample_rate_hz / 100);
  const size_t dst_size_10ms_mono =
      static_cast<size_t>(dst_sample_rate_hz / 100);
  sinc_resampler_.reset(
      new PushSincResampler(src_size_10ms_mono, dst_size_10ms_mono));
  if (num_channels_ == 2) {
    src_left_.reset(new T[src_size_10ms_mono]);
    src_right_.reset(new T[src_size_10ms_mono]);
    dst_left_.reset(new T[dst_size_10ms_mono]);
    dst_right_.reset(new T[dst_size_10ms_mono]);
    sinc_resampler_right_.reset(
        new PushSincResampler(src_size_10ms_mono, dst_size_10ms_mono));
  }

  return 0;
}

template <typename T>
int PushResampler<T>::Resample(const T* src,
                               size_t src_length,
                               T* dst,
                               size_t dst_capacity) {
  CheckExpectedBufferSizes(src_length, dst_capacity, num_channels_,
                           src_sample_rate_hz_, dst_sample_rate_hz_);

  if (src_sample_rate_hz_ == dst_sample_rate_hz_) {
    // The old resampler provides this memcpy facility in the case of matching
    // sample rates, so reproduce it here for the sinc resampler.
    memcpy(dst, src, src_length * sizeof(T));
    return static_cast<int>(src_length);
  }
  if (num_channels_ == 2) {
    const size_t src_length_mono = src_length / num_channels_;
    const size_t dst_capacity_mono = dst_capacity / num_channels_;
    T* deinterleaved[] = {src_left_.get(), src_right_.get()};
    Deinterleave(src, src_length_mono, num_channels_, deinterleaved);

    size_t dst_length_mono = sinc_resampler_->Resample(
        src_left_.get(), src_length_mono, dst_left_.get(), dst_capacity_mono);
    sinc_resampler_right_->Resample(src_right_.get(), src_length_mono,
                                    dst_right_.get(), dst_capacity_mono);

    deinterleaved[0] = dst_left_.get();
    deinterleaved[1] = dst_right_.get();
    Interleave(deinterleaved, dst_length_mono, num_channels_, dst);
    return static_cast<int>(dst_length_mono * num_channels_);
  } else {
    return static_cast<int>(
        sinc_resampler_->Resample(src, src_length, dst, dst_capacity));
  }
}

// Explictly generate required instantiations.
template class PushResampler<int16_t>;
template class PushResampler<float>;
PushResampler<T>::InitializeIfNeeded() allocates the buffers and the PushSincResampler instances needed for the given source and destination sample rates.

In PushResampler<T>::Resample(), the actual resampling is delegated to PushSincResampler, which resamples a single channel. For stereo audio, PushResampler<T>::Resample() first splits the interleaved frame into two mono channels, resamples each separately, and finally interleaves the results again.

The splitting of interleaved stereo data into two mono buffers, and the merging of two mono buffers back into an interleaved stereo frame, are implemented in webrtc/src/common_audio/include/audio_util.h as follows:

 
// Deinterleave audio from |interleaved| to the channel buffers pointed to
// by |deinterleaved|. There must be sufficient space allocated in the
// |deinterleaved| buffers (|num_channel| buffers with |samples_per_channel|
// per buffer).
template <typename T>
void Deinterleave(const T* interleaved,
                  size_t samples_per_channel,
                  size_t num_channels,
                  T* const* deinterleaved) {
  for (size_t i = 0; i < num_channels; ++i) {
    T* channel = deinterleaved[i];
    size_t interleaved_idx = i;
    for (size_t j = 0; j < samples_per_channel; ++j) {
      channel[j] = interleaved[interleaved_idx];
      interleaved_idx += num_channels;
    }
  }
}

// Interleave audio from the channel buffers pointed to by |deinterleaved| to
// |interleaved|. There must be sufficient space allocated in |interleaved|
// (|samples_per_channel| * |num_channels|).
template <typename T>
void Interleave(const T* const* deinterleaved,
                size_t samples_per_channel,
                size_t num_channels,
                T* interleaved) {
  for (size_t i = 0; i < num_channels; ++i) {
    const T* channel = deinterleaved[i];
    size_t interleaved_idx = i;
    for (size_t j = 0; j < samples_per_channel; ++j) {
      interleaved[interleaved_idx] = channel[j];
      interleaved_idx += num_channels;
    }
  }
}
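
A minimal usage sketch of the resampler (assumed values, not from the WebRTC sources), built only on the two member functions quoted in the header above: resampling one 10 ms interleaved stereo frame from 48 kHz down to 16 kHz:

// Assumes: #include "common_audio/resampler/include/push_resampler.h".
void Downsample48kTo16kStereo() {
  webrtc::PushResampler<int16_t> resampler;
  resampler.InitializeIfNeeded(/*src_sample_rate_hz=*/48000,
                               /*dst_sample_rate_hz=*/16000,
                               /*num_channels=*/2);

  const size_t kSrcSamples = 480 * 2;   // 10 ms at 48 kHz, interleaved stereo.
  const size_t kDstCapacity = 160 * 2;  // 10 ms at 16 kHz, interleaved stereo.
  int16_t src[kSrcSamples] = {0};       // Would hold real audio in practice.
  int16_t dst[kDstCapacity];

  // Returns the total number of output samples, here 320 (160 per channel).
  int produced = resampler.Resample(src, kSrcSamples, dst, kDstCapacity);
  (void)produced;
}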

Those are the basic audio data operations: mixing, channel conversion, and resampling.
