Forwarding media streams and send-side congestion control in the mediasoup server

This article takes a close look at the implementation of GCC (Google Congestion Control) in WebRTC, focusing on its role on the sending side. It covers forwarding the data received from producers, loss-based and delay-based bitrate control, and how GCC reacts to changing network conditions by adjusting the bitrate dynamically. By analyzing RTCP feedback and the RTT computed from receiver reports, GCC can detect network congestion and optimize the transmission quality of audio and video streams.

   The WebRTC SFU mediasoup is both a receiver and a sender of media streams, and also a forwarder of control messages.
   As a receiver it accepts audio and video data, keeps bitrate statistics, receives the RTCP packets (SR, SDES, etc.) sent by the producer, and sends RTPFB packets back as timely feedback on reception.
   As a sender it forwards media data to the other consumers, sends RTCP sender reports (SR), processes the RTPFB and other RTCP packets coming back from receivers, and runs send-side congestion control.
   As a forwarder of congestion-related control messages, it takes requests coming from the receivers of a media stream (for example a PSFB feedback requesting a keyframe), processes them, and forwards them to the sender of that stream.
This article discusses how mediasoup behaves as the sender.
1. Forwarding audio and video data
   After mediasoup receives media data it has to forward it to all consumers. The process is simple; the flow is as follows:
[Figure: media forwarding flow]
After media data is received and has gone through the receive path (described in the article on the mediasoup server receiving media streams and receive-side congestion control), it is forwarded to the other participants, i.e. the other consumers. Before a packet goes out to the network, the abs-send-time RTP header extension is rewritten; if delay-based congestion control runs on the receiver side, this timestamp must be present.
abs-send-time is a 24-bit fixed-point value in 6+18 format: the high 6 bits are seconds (at most 2^6 = 64 s) and the low 18 bits are in units of 1/2^18 s (about 3.8 µs). Shifting it left by 8 bits (multiplying by 256) produces a 32-bit timestamp, so the original 1/2^18 s resolution is expressed with a representation granularity of 1/2^(18+8) = 1/2^26 s, i.e. 1000/2^26 ms, which is the form the inter-arrival filter works with.
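As a rough illustration (a sketch, not code from the mediasoup sources; the helper names are made up), packing a millisecond send time into the 24-bit abs-send-time value and into the upshifted 32-bit form used by the inter-arrival filter could look like this:

#include <cstdint>

// Hypothetical helpers, for illustration only: 6.18 fixed-point abs-send-time.
uint32_t MsToAbsSendTime24(int64_t time_ms) {
  // (time_ms / 1000) * 2^18, rounded, keeping only the low 24 bits.
  return static_cast<uint32_t>(((time_ms << 18) + 500) / 1000) & 0x00FFFFFF;
}

uint32_t AbsSendTime24To32(uint32_t abs_send_time_24) {
  // Shift left by 8 (multiply by 256) so the value fills 32 bits (6.26 format).
  return abs_send_time_24 << 8;
}
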
   The most recent version of GCC performs delay-based congestion control on the sending side, so for every packet sent we also build a PacketFeedback entry and store it in the send history of the GCC feedback adapter module TransportFeedbackAdapter. The fields of the PacketFeedback struct are:

PacketFeedback{
  int64_t creation_time_ms;      // time at which the sender created this struct
  int64_t arrival_time_ms;       // arrival time at the receiver, filled in when parsing an RTPFB; kNotReceived for packets that were only sent
  int64_t send_time_ms;          // send time of the packet, set by the sender
  uint16_t sequence_number;      // packet sequence number, increasing by 1, set by the sender
  int64_t long_sequence_number;  // session-unique packet identifier, increasing by 1 per packet, generated by the sender
  size_t payload_size;           // payload size of the packet
  size_t unacknowledged_data;    // size of preceding packets that are not covered by this feedback
  uint16_t local_net_id;         // identifies the local end of the network route this packet was sent on
  uint16_t remote_net_id;        // identifies the remote end of that network route
  PacedPacketInfo pacing_info;   // pacing information for the packet
  absl::optional<uint32_t> ssrc; // SSRC of the RTP packet this feedback refers to
  uint16_t rtp_sequence_number;  // RTP sequence number of that packet
}

   WebRTC and WebRTC servers also use another module, the application-limited region detector (AlrDetector). The idea is to look at a time window (500 ms in the implementation) and the number of bytes sent within it to decide whether the outgoing traffic is currently application-limited. Here is an example.
   If the bandwidth estimator outputs a target bitrate target_bitrate_bps, the pacer sends data at that rate; but if for some reason the actual output rate falls below target_bitrate_bps, the bandwidth is under-utilized and AlrDetector reacts in a way that influences the bitrate adjustment. AlrDetector holds an IntervalBudget, the budget for a time interval, which tracks the bytes already sent at the target rate, the remaining byte budget, the usage ratio, and so on. Its maximum budget is computed as follows:

void IntervalBudget::set_target_rate_kbps(int target_rate_kbps) {
  target_rate_kbps_ = target_rate_kbps;// final target bitrate output by congestion control
  max_bytes_in_budget_ = (kWindowMs * target_rate_kbps_) / 8;// maximum budget within the kWindowMs = 500 ms window
  bytes_remaining_ = std::min(std::max(-max_bytes_in_budget_, bytes_remaining_),
                              max_bytes_in_budget_);
}

Every sent packet is run through AlrDetector; either of the following two conditions triggers a state change (state_changed = true):

  1. The remaining budget ratio (remaining bytes / maximum budget bytes) is above a threshold while ALR is inactive (alr_started_time_ms_ is unset), which starts ALR;
  2. The remaining budget ratio (remaining bytes / maximum budget bytes) is below a threshold while ALR is active, which stops ALR.
void AlrDetector::OnBytesSent(size_t bytes_sent, int64_t send_time_ms) {
  if (!last_send_time_ms_.has_value()) {
    last_send_time_ms_ = send_time_ms;
    // Since the duration for sending the bytes is unknown, return without
    // updating alr state.
    return;
  }
  int64_t delta_time_ms = send_time_ms - *last_send_time_ms_;
  last_send_time_ms_ = send_time_ms;

  alr_budget_.UseBudget(bytes_sent);
  alr_budget_.IncreaseBudget(delta_time_ms);// a packet of bytes_sent bytes was sent: UseBudget() subtracts it from
  // bytes_remaining_, IncreaseBudget() adds the budget earned during delta_time_ms
  bool state_changed = false;
  if (alr_budget_.budget_ratio() > start_budget_level_ratio_ &&
      !alr_started_time_ms_) {// budget ratio (remaining bytes / max budget) above start_budget_level_ratio_ = 0.8
      // and ALR currently inactive
    alr_started_time_ms_.emplace(DepLibUV::GetTimeMsInt64());
    state_changed = true;
  } else if (alr_budget_.budget_ratio() < stop_budget_level_ratio_ &&
             alr_started_time_ms_) {// budget ratio (remaining bytes / max budget) below stop_budget_level_ratio_ = 0.5
             // and ALR currently active
    state_changed = true;
    alr_started_time_ms_.reset();
  }

  if (state_changed)
    MS_DEBUG_DEV("state changed");
}
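
UseBudget() and IncreaseBudget() are not shown above. A simplified sketch of what they do, based on WebRTC's IntervalBudget (the exact code shipped with mediasoup may differ in details such as integer types and build-up handling), could look like this:

// Simplified sketch of the IntervalBudget methods used in OnBytesSent() above
// (based on WebRTC's modules/pacing/interval_budget.cc, not the exact mediasoup code).
void IntervalBudget::IncreaseBudget(int64_t delta_time_ms) {
  // Budget earned while delta_time_ms elapsed at the target rate, capped at the window maximum.
  int64_t bytes = target_rate_kbps_ * delta_time_ms / 8;
  bytes_remaining_ = std::min(bytes_remaining_ + bytes, max_bytes_in_budget_);
}

void IntervalBudget::UseBudget(size_t bytes) {
  // Spend budget for the bytes just sent; never drop below -max_bytes_in_budget_.
  bytes_remaining_ = std::max(bytes_remaining_ - static_cast<int64_t>(bytes),
                              -max_bytes_in_budget_);
}

double IntervalBudget::budget_ratio() const {
  // Fraction of the window budget still unused; AlrDetector compares this
  // against the start/stop thresholds.
  if (max_bytes_in_budget_ == 0)
    return 0.0;
  return static_cast<double>(bytes_remaining_) / max_bytes_in_budget_;
}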

   So what does AlrDetector actually do once it is in the active state?
2. GCC congestion control
GCC uses two congestion control algorithms:
A delay-based congestion control algorithm: in the earlier implementation it ran on the receiving side. The receiver records the arrival time and size of every packet, computes the inter-group delay variation, judges the current network congestion from it, and feeds the resulting bitrate estimate back to the sender via RTCP feedback (TMMBR or REMB).
A loss-based congestion control algorithm: it runs on the sending side. The sender obtains the loss information from the periodic RTCP RR (Receiver Report) packets, computes the RTT, and estimates the bitrate from them.
2.1 Loss-based bitrate control
   The GCC algorithm controls the send bitrate at the sender based on the loss rate. The basic idea: the loss rate reflects network congestion. If the loss rate is small or zero the network is in good shape and the send bitrate can be increased, as long as it stays below the configured maximum; conversely, if the loss rate grows the network is degrading and the send bitrate should be reduced. In all other cases the send bitrate is kept unchanged.
   Loss-based control works on the RR feedback packets sent by the media receiver: the RTT is computed from them, the loss rate and the number of lost packets are extracted, and this information is passed to the congestion control module RtpTransportControllerSend to estimate a new send bitrate. The RR packet format is described in the article on the mediasoup server receiving media streams and receive-side congestion control. A simplified flow:
[Figures: loss-based bitrate control flow]

Loss-based congestion control computes the RTT (round-trip time) from the RR. RTT is an important performance metric in computer networks: it is the total time from the moment the sender starts sending data until it receives the acknowledgement from the receiver (the receiver sends the acknowledgement as soon as it gets the data). It is computed as follows:

[Figure: RTT derived from the SR/RR timestamps]
   As shown in the figure above, the sender sends the SR at T(S_SR), the receiver receives the SR at T(LSR), the receiver sends the RR at T(S_RR), and the sender receives the RR at T(R_RR). The RTT is:

                  RTT = (T(LSR) - T(S_SR)) + (T(R_RR) - T(S_RR))

In the ideal case T(LSR) = T(S_RR), so

                  RTT = T(R_RR) - T(S_SR)

In practice, however, T(S_RR) is larger than T(LSR); with DLSR = T(S_RR) - T(LSR) this gives

                  RTT = T(R_RR) - T(S_SR) - DLSR
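As a small worked example of the compact NTP arithmetic used below: lastSr, dlsr and the local compact NTP time are all expressed in units of 1/65536 s. If compactNtp - dlsr - lastSr = 6554, the RTT is 6554/65536 s, and the conversion in the code yields (6554 >> 16) * 1000 + (6554 & 0xFFFF) / 65536 * 1000, which is roughly 100 ms.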

The code is as follows:

void RtpStreamSend::ReceiveRtcpReceiverReport(RTC::RTCP::ReceiverReport* report)
	{
		uint64_t nowMs = DepLibUV::GetTimeMs();
		auto ntp       = Utils::Time::TimeMs2Ntp(nowMs);// NTP representation of the current absolute time

		// Get the compact NTP representation of the current timestamp.
		uint32_t compactNtp = (ntp.seconds & 0x0000FFFF) << 16;
		compactNtp |= (ntp.fractions & 0xFFFF0000) >> 16;// compact NTP timestamp of the current absolute time

		uint32_t lastSr = report->GetLastSenderReport();// middle 32 bits of the NTP timestamp carried in the last SR the receiver got (i.e. when that SR was sent)
		uint32_t dlsr   = report->GetDelaySinceLastSenderReport();// delay at the receiver between receiving that SR and sending this RR

		// RTT in 1/2^16 second fractions.
		uint32_t rtt{ 0 };

		// If no Sender Report was received by the remote endpoint yet, ignore lastSr
		// and dlsr values in the Receiver Report.
		if (lastSr && dlsr && (compactNtp > dlsr + lastSr))
			rtt = compactNtp - dlsr - lastSr;// rtt is the time from the sender emitting the SR until the RR arrives back at the sender,
			// excluding the interval at the receiver between receiving that SR and sending the RR

		// RTT in milliseconds.
		this->rtt = static_cast<float>(rtt >> 16) * 1000;
		this->rtt += (static_cast<float>(rtt & 0x0000FFFF) / 65536) * 1000;

		if (this->rtt > 0.0f)
			this->hasRtt = true;

		this->packetsLost  = report->GetTotalLost();
		this->fractionLost = report->GetFractionLost();
	}

The loss rate is then computed from the RR. Step 1: compute the number of packets expected and the number of packets lost in the interval.

void RtpTransportControllerSend::OnReceivedRtcpReceiverReportBlocks(
    const ReportBlockList& report_blocks,
    int64_t now_ms) {
  if (report_blocks.empty())
    return;

  int total_packets_lost_delta = 0;
  int total_packets_delta = 0;

  // Compute the packet loss from all report blocks.
  for (const RTCPReportBlock& report_block : report_blocks) {
    auto it = last_report_blocks_.find(report_block.source_ssrc);
    if (it != last_report_blocks_.end()) {
      auto number_of_packets = report_block.extended_highest_sequence_number -
                        it->second.extended_highest_sequence_number;// number of packets that should have arrived in this RR interval, derived from the extended highest sequence numbers
      total_packets_delta += number_of_packets;
      auto lost_delta = report_block.packets_lost - it->second.packets_lost;// number of packets lost in this RR interval
      total_packets_lost_delta += lost_delta;
    }
    last_report_blocks_[report_block.source_ssrc] = report_block;
  }
  // Can only compute delta if there has been previous blocks to compare to. If
  // not, total_packets_delta will be unchanged and there's nothing more to do.
  if (!total_packets_delta)
    return;
  int packets_received_delta = total_packets_delta - total_packets_lost_delta;
  // To detect lost packets, at least one packet has to be received. This check
  // is needed to avoid bandwith detection update in
  // VideoSendStreamTest.SuspendBelowMinBitrate

  if (packets_received_delta < 1)
    return;
  Timestamp now = Timestamp::ms(now_ms);
  TransportLossReport msg;// build the GCC message; the computation above exists to fill in this struct
  msg.packets_lost_delta = total_packets_lost_delta;// number of packets lost
  msg.packets_received_delta = packets_received_delta;// number of packets received
  msg.receive_time = now;
  msg.start_time = last_report_block_time_;
  msg.end_time = now;

  PostUpdates(controller_->OnTransportLossReport(msg));
  last_report_block_time_ = now;
}

The bandwidth estimation module bandwidth_estimation_ processes the loss information:

NetworkControlUpdate GoogCcNetworkController::OnTransportLossReport(
    TransportLossReport msg) {
  if (packet_feedback_only_)
    return NetworkControlUpdate();
  int64_t total_packets_delta =
      msg.packets_received_delta + msg.packets_lost_delta;
  bandwidth_estimation_->UpdatePacketsLost(
      msg.packets_lost_delta, total_packets_delta, msg.receive_time);
  return NetworkControlUpdate();
}
void SendSideBandwidthEstimation::UpdatePacketsLost(int packets_lost,
                                                    int number_of_packets,
                                                    Timestamp at_time) {
  last_loss_feedback_ = at_time;
  if (first_report_time_.IsInfinite())
    first_report_time_ = at_time;

  // Check sequence number diff and weight loss report
  if (number_of_packets > 0) {
    // Accumulate reports.
    lost_packets_since_last_loss_update_ += packets_lost;
    expected_packets_since_last_loss_update_ += number_of_packets;

    // Don't generate a loss rate until it can be based on enough packets.
    if (expected_packets_since_last_loss_update_ < kLimitNumPackets)
      return;

    has_decreased_since_last_fraction_loss_ = false;
    int64_t lost_q8 = lost_packets_since_last_loss_update_ << 8;
    int64_t expected = expected_packets_since_last_loss_update_;
    last_fraction_loss_ = std::min<int>(lost_q8 / expected, 255);// the computed loss fraction (Q8, 0-255)

    // Reset accumulators.

    lost_packets_since_last_loss_update_ = 0;
    expected_packets_since_last_loss_update_ = 0;
    last_loss_packet_report_ = at_time;
    UpdateEstimate(at_time);// push the computed loss fraction into bandwidth_estimation_ and produce a target bitrate
  }
  UpdateUmaStatsPacketsLost(at_time, packets_lost);
}

   The computed loss fraction is fed into bandwidth_estimation_, which outputs the target bitrate.
Loss rate: loss = last_fraction_loss_ / 256.0
   If loss < 2%, the network is in good shape and the send bitrate is increased: target_bitrate = previous target bitrate * 1.08
   If 2% <= loss < 10%, the network is in a moderate state and the current bitrate is kept.
   If loss >= 10%, the network is congested and the bitrate is lowered: target_bitrate = current bitrate * (1 - 0.5 * loss)
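For example, with last_fraction_loss_ = 51 the loss is 51/256, about 20%, so the new bitrate becomes current_bitrate * (512 - 51)/512, about 0.90 * current_bitrate, which is exactly the (1 - 0.5 * loss) factor in the code below; with last_fraction_loss_ = 13 (about 5% loss) the bitrate is simply left unchanged.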
   The implementation is as follows:

void SendSideBandwidthEstimation::UpdateEstimate(Timestamp at_time) {
  DataRate new_bitrate = current_bitrate_;
  if (rtt_backoff_.CorrectedRtt(at_time) > rtt_backoff_.rtt_limit_) {// the time since the last feedback arrived is too long and exceeds the configured RTT limit rtt_limit_
    if (at_time - time_last_decrease_ >= rtt_backoff_.drop_interval_ &&// more than drop_interval_ = 300 ms since the last big decrease, and the current bitrate is above bandwidth_floor_ = 5 kbps
        current_bitrate_ > rtt_backoff_.bandwidth_floor_) {
      time_last_decrease_ = at_time;
      new_bitrate = std::max(current_bitrate_ * rtt_backoff_.drop_fraction_,// reduce the bitrate by drop_fraction_, but not below the floor
                             rtt_backoff_.bandwidth_floor_.Get());
      link_capacity_.OnRttBackoff(new_bitrate, at_time);
    }
    CapBitrateToThresholds(at_time, new_bitrate);// output the final bitrate
    return;
  }

  // If no loss has been reported, use the REMB and/or the delay-based estimate, to allow bitrate probing at startup.
  if (last_fraction_loss_ == 0 && IsInStartPhase(at_time)) {
    new_bitrate = std::max(bwe_incoming_, new_bitrate);// bwe_incoming_ is the estimate brought back from the receiver via REMB
    new_bitrate = std::max(delay_based_bitrate_, new_bitrate);// delay_based_bitrate_ is the delay-based estimate; the target is the maximum of the three
    if (loss_based_bandwidth_estimation_.Enabled()) {
      loss_based_bandwidth_estimation_.SetInitialBitrate(new_bitrate);
    }

    if (new_bitrate != current_bitrate_) {
      min_bitrate_history_.clear();
      if (loss_based_bandwidth_estimation_.Enabled()) {
        min_bitrate_history_.push_back(std::make_pair(at_time, new_bitrate));
      } else {
        min_bitrate_history_.push_back(// if the bitrate changed, record the current time together with the target bitrate
            std::make_pair(at_time, current_bitrate_));
      }
      CapBitrateToThresholds(at_time, new_bitrate);
      return;
    }
  }
  UpdateMinHistory(at_time);
  if (last_loss_packet_report_.IsInfinite()) {
    // No feedback received.
    CapBitrateToThresholds(at_time, current_bitrate_);// output the final bitrate
    return;
  }

  if (loss_based_bandwidth_estimation_.Enabled()) {
    loss_based_bandwidth_estimation_.Update(
        at_time, min_bitrate_history_.front().second, last_round_trip_time_);
    new_bitrate = MaybeRampupOrBackoff(new_bitrate, at_time);
    CapBitrateToThresholds(at_time, new_bitrate);
    return;
  }

  TimeDelta time_since_loss_packet_report = at_time - last_loss_packet_report_;
  TimeDelta time_since_loss_feedback = at_time - last_loss_feedback_;
  if (time_since_loss_packet_report < 1.2 * kMaxRtcpFeedbackInterval) {
    float loss = last_fraction_loss_ / 256.0f;// loss fraction = last_fraction_loss_ / 256
    if (current_bitrate_ < bitrate_threshold_ || loss <= low_loss_threshold_) {
      // Loss < 2%, or the bitrate is still below bitrate_threshold_: within the increase interval, raise the bitrate
      // by 8% over the earlier target. Loss is only acted on once the bitrate exceeds the threshold, which is a crude
      // way of handling losses that are unrelated to congestion.
      new_bitrate =
          DataRate::bps(min_bitrate_history_.front().second.bps() * 1.08 + 0.5);
      new_bitrate += DataRate::bps(1000);// add an extra 1 kbps just to make sure the stream does not get stuck
      // (a little extra at low rates, negligible at high rates)
    } else if (current_bitrate_ > bitrate_threshold_) {
      if (loss <= high_loss_threshold_) {
        // 2% < loss < 10%: keep the current bitrate
      } else {
        // Loss >= 10%: limit the rate decreases to once per kBweDecreaseInterval
        if (!has_decreased_since_last_fraction_loss_ &&
            (at_time - time_last_decrease_) >=
                (kBweDecreaseInterval + last_round_trip_time_)) {
          time_last_decrease_ = at_time;
          new_bitrate =
              DataRate::bps((current_bitrate_.bps() *
                             static_cast<double>(512 - last_fraction_loss_)) /
                            512.0);
          has_decreased_since_last_fraction_loss_ = true;
        }
      }
    }
  } else if (time_since_loss_feedback >
                 kFeedbackTimeoutIntervals * kMaxRtcpFeedbackInterval &&// no loss feedback received for more than 1500 ms
             (last_timeout_.IsInfinite() ||
              at_time - last_timeout_ > kTimeoutInterval)) {// handle the feedback timeout (kTimeoutInterval = 1000 ms)
    if (in_timeout_experiment_) {
      MS_WARN_TAG(bwe, "Feedback timed out (%s), reducing bitrate",
                          ToString(time_since_loss_feedback).c_str());
      new_bitrate = new_bitrate * 0.8;// reduce the bitrate to 80% of its previous value
      // Reset the accumulators: the missing feedback has been acted on, so these lost, old packets should not be acted on again.
      lost_packets_since_last_loss_update_ = 0;
      expected_packets_since_last_loss_update_ = 0;
      last_timeout_ = at_time;
    }
  }

  CapBitrateToThresholds(at_time, new_bitrate);// output the final bitrate
}

The final target bitrate is the minimum of the bitrate reported back by the receiver via REMB (if present), the delay-based estimate, and the loss-based estimate. The code is as follows:

void SendSideBandwidthEstimation::CapBitrateToThresholds(Timestamp at_time,
                                                         DataRate bitrate) {
  if (bwe_incoming_ > DataRate::Zero() && bitrate > bwe_incoming_) {// if REMB brought back an estimate bwe_incoming_, take the smaller of bwe_incoming_ and the loss-based bitrate
    MS_DEBUG_DEV("bwe_incoming_:%lld", bwe_incoming_.bps());
    bitrate = bwe_incoming_;
  }
  if (delay_based_bitrate_ > DataRate::Zero() &&
      bitrate > delay_based_bitrate_) {// take the smaller of the delay-based estimate and the bitrate selected so far
    MS_DEBUG_DEV("delay_based_bitrate_:%lld", delay_based_bitrate_.bps());
    bitrate = delay_based_bitrate_;
  }
  if (loss_based_bandwidth_estimation_.Enabled() &&
      loss_based_bandwidth_estimation_.GetEstimate() > DataRate::Zero()) {// take the smaller of the loss-based BWE estimate and the bitrate selected so far
    MS_DEBUG_DEV("loss_based_bandwidth_estimation_.GetEstimate():%lld", loss_based_bandwidth_estimation_.GetEstimate().bps());
    bitrate = std::min(bitrate, loss_based_bandwidth_estimation_.GetEstimate());
  }
  if (bitrate > max_bitrate_configured_) {// clamp to the configured maximum bitrate
    MS_DEBUG_DEV("bitrate > max_bitrate_configured_, setting bitrate to max_bitrate_configured_");
    bitrate = max_bitrate_configured_;
  }
  if (bitrate < min_bitrate_configured_) {// clamp to the configured minimum bitrate
    MS_DEBUG_DEV("bitrate < min_bitrate_configured_");
    if (last_low_bitrate_log_.IsInfinite() ||
        at_time - last_low_bitrate_log_ > kLowBitrateLogPeriod) {
      MS_WARN_TAG(bwe, "Estimated available bandwidth %s"
                        " is below configured min bitrate %s",
                        ToString(bitrate).c_str(),
                        ToString(min_bitrate_configured_).c_str());
      last_low_bitrate_log_ = at_time;
    }
    bitrate = min_bitrate_configured_;
  }

  if (bitrate != current_bitrate_ ||
      last_fraction_loss_ != last_logged_fraction_loss_ ||
      at_time - last_rtc_event_log_ > kRtcEventLogPeriod) {// update the logging state when the bitrate or the loss fraction changed, or when the log period has elapsed
    last_logged_fraction_loss_ = last_fraction_loss_;
    last_rtc_event_log_ = at_time;
  }
  MS_DEBUG_DEV("current_bitrate_:%lld", current_bitrate_.bps());
  current_bitrate_ = bitrate;

  if (acknowledged_rate_) {
    link_capacity_.OnRateUpdate(std::min(current_bitrate_, *acknowledged_rate_),
                                at_time);
  }
}

The above estimates the bitrate from packet loss. The computed RTT also has to be pushed into the delay-based congestion control:

NetworkControlUpdate GoogCcNetworkController::OnRoundTripTimeUpdate(
    RoundTripTimeUpdate msg) {
  if (packet_feedback_only_ || msg.smoothed)
    return NetworkControlUpdate();
  //RTC_DCHECK(!msg.round_trip_time.IsZero());
  if (delay_based_bwe_)
    delay_based_bwe_->OnRttUpdate(msg.round_trip_time);// update the RTT used by the delay-based congestion control
  bandwidth_estimation_->UpdateRtt(msg.round_trip_time, msg.receive_time);// update the RTT in the bandwidth estimation module
  return NetworkControlUpdate();
}

2.2 Delay-based bitrate control
   In recent WebRTC (and WebRTC server) builds, the delay-based part of GCC runs on the sender and is triggered when an RTPFB feedback packet comes back. For the theory see the GCC congestion control article; here we only look at how the delay-based controller is implemented in the WebRTC / WebRTC server sources. The overall flow of the implementation is shown below:
[Figure: delay-based congestion control processing flow]

   First, the feedback adapter class TransportFeedbackAdapter parses the status entries out of the RTPFB. The feedback records the status of every packet, whether it arrived at the receiver or not, which boils down to two states: Received and UnReceived.

const std::vector<webrtc::rtcp::ReceivedPacket> GetReceivedPackets(
			const RTC::RTCP::FeedbackRtpTransportPacket* packet)// packet is the parsed RTPFB feedback packet
{
	std::vector<webrtc::rtcp::ReceivedPacket> receivedPackets;

	for (auto& packetResult : packet->GetPacketResults())// GetPacketResults() extracts every packet from the chunks and marks its status
	{
	  if (packetResult.received)
	    receivedPackets.emplace_back(packetResult.sequenceNumber, packetResult.delta);// keep only the packets the receiver actually received
	}

	return receivedPackets;
};

Two chunk types are used in the RTPFB: RunLengthChunk and TwoBitVectorChunk. A RunLengthChunk records a run of packets that all have the same status, either all received or all not received.

void FeedbackRtpTransportPacket::RunLengthChunk::FillResults(
		  std::vector<struct FeedbackRtpTransportPacket::PacketResult>& packetResults,
		  uint16_t& currentSequenceNumber) const
	{
		MS_TRACE();
		bool received = (this->status == Status::SmallDelta || this->status == Status::LargeDelta);
		for (uint16_t count{ 1u }; count <= this->count; ++count)
		{
			packetResults.emplace_back(++currentSequenceNumber, received);
		}
	}

A TwoBitVectorChunk records packets that may have either status, received or not received.

void FeedbackRtpTransportPacket::TwoBitVectorChunk::FillResults(
		  std::vector<struct FeedbackRtpTransportPacket::PacketResult>& packetResults,
		  uint16_t& currentSequenceNumber) const
{
	MS_TRACE();

	for (auto status : this->statuses)
	{
		bool received = (status == Status::SmallDelta || status == Status::LargeDelta);

		packetResults.emplace_back(++currentSequenceNumber, received);
	}
}

For each received packet we need its arrival time at the receiver, and that is what the delta carried in the RTPFB provides: delta is the time difference relative to the previous received packet, so knowing the arrival time of the previous packet lets us derive the arrival time of the current one from delta.

for (size_t idx{ 0u }; idx < packetResults.size(); ++idx)
{
	auto& packetResult = packetResults[idx];

	if (!packetResult.received)
		continue;

	currentReceivedAtMs += this->deltas.at(deltaIdx) / 4;// this->deltas holds the deltas carried in the RTPFB, in 1/4 ms units
	packetResult.delta        = this->deltas.at(deltaIdx);
	packetResult.receivedAtMs = currentReceivedAtMs;
	deltaIdx++;
}

Since these are feedback entries, each status entry is wrapped into a PacketFeedback struct, whose fields were described above. The code:

std::vector<PacketFeedback> TransportFeedbackAdapter::GetPacketFeedbackVector(
    const RTC::RTCP::FeedbackRtpTransportPacket& feedback,
    Timestamp feedback_time) {
  // Add the timestamp deltas to a local time base selected when the first packet arrived. This is not the true time base, but it makes manual inspection of the timestamps easier.
  if (last_timestamp_us_ == kNoTimestamp) {
    current_offset_ms_ = feedback_time.ms();
  } else {
    current_offset_ms_ +=
      mediasoup_helpers::FeedbackRtpTransport::GetBaseDeltaUs(&feedback, last_timestamp_us_) / 1000;
  }
  last_timestamp_us_ =
    mediasoup_helpers::FeedbackRtpTransport::GetBaseTimeUs(&feedback);

  std::vector<PacketFeedback> packet_feedback_vector;
  if (feedback.GetPacketStatusCount() == 0) {
    MS_WARN_DEV("empty transport feedback packet received");
    return packet_feedback_vector;
  }
  packet_feedback_vector.reserve(feedback.GetPacketStatusCount());// build the PacketFeedback output vector
  {
    size_t failed_lookups = 0;
    int64_t offset_us = 0;
    int64_t timestamp_ms = 0;
    uint16_t seq_num = feedback.GetBaseSequenceNumber();
    for (const auto& packet : mediasoup_helpers::FeedbackRtpTransport::GetReceivedPackets(&feedback)) {// GetReceivedPackets(&feedback) returns only the packets whose status is Received
      // Build a PacketFeedback for each lost packet and append it to the vector.
      for (; seq_num != packet.sequence_number(); ++seq_num) {
        PacketFeedback packet_feedback(PacketFeedback::kNotReceived, seq_num);
        if (!send_time_history_.GetFeedback(&packet_feedback, false))// do not remove it from the send history
          ++failed_lookups;
        if (packet_feedback.local_net_id == local_net_id_ &&
            packet_feedback.remote_net_id == remote_net_id_) {// if both the local and remote network ids match, append it to the vector
          packet_feedback_vector.push_back(packet_feedback);
        }
      }

      // Build a PacketFeedback for the received packet and append it to the vector.
      offset_us += packet.delta_us();// accumulate the deltas from the RTPFB to obtain each packet's arrival time
      timestamp_ms = current_offset_ms_ + (offset_us / 1000);// arrival time of this packet
      PacketFeedback packet_feedback(timestamp_ms, packet.sequence_number());
      if (!send_time_history_.GetFeedback(&packet_feedback, true))// remove it from the send history
        ++failed_lookups;
      if (packet_feedback.local_net_id == local_net_id_ &&
          packet_feedback.remote_net_id == remote_net_id_) {
        packet_feedback_vector.push_back(packet_feedback);// if both the local and remote network ids match, append it to the vector
      }
      ++seq_num;
    }

    if (failed_lookups > 0) {
      MS_WARN_DEV("failed to lookup send time for %zu"
                  " packet%s, send time history too small?",
                  failed_lookups,
                  (failed_lookups > 1 ? "s" : ""));
    }
  }
  return packet_feedback_vector;
}

   Each feedback entry should match an entry stored in the feedback adapter's send history at send time. In normal operation, every packet in the send history really was sent; whether the network then dropped it is not the sender's concern here. Both received and lost packets are recorded in the RTPFB: received packets are removed from the send history, lost ones remain in it.
Once the std::vector<PacketFeedback> has been returned, it is wrapped into the message format GCC expects, the TransportPacketsFeedback struct:

TransportPacketsFeedback msg;
  for (const PacketFeedback& rtp_feedback : feedback_vector) {// feedback_vector is the parsed std::vector<PacketFeedback>
    if (rtp_feedback.send_time_ms != PacketFeedback::kNoSendTime) {// feedback entries that carry a send time
      auto feedback = NetworkPacketFeedbackFromRtpPacketFeedback(rtp_feedback);
      // (debug logging of feedback.sent_packet.sequence_number, send_time, size and receive_time omitted)

      msg.packet_feedbacks.push_back(feedback);
    } else if (rtp_feedback.arrival_time_ms == PacketFeedback::kNotReceived) {// entries with neither a send time nor an arrival time
      MS_DEBUG_DEV("--- rtp_feedback.arrival_time_ms == PacketFeedback::kNotReceived ---");
      msg.sendless_arrival_times.push_back(Timestamp::PlusInfinity());
    } else {// entries without a send time but with an arrival time
      msg.sendless_arrival_times.push_back(
          Timestamp::ms(rtp_feedback.arrival_time_ms));
    }
  }
  {
    absl::optional<int64_t> first_unacked_send_time_ms =// send time of the first packet that has not been acknowledged yet, i.e. the send info of a lost packet
        send_time_history_.GetFirstUnackedSendTime();
    if (first_unacked_send_time_ms)
      msg.first_unacked_send_time = Timestamp::ms(*first_unacked_send_time_ms);
  }
  msg.feedback_time = feedback_receive_time;// reception time of the RTPFB feedback packet
  msg.prior_in_flight = prior_in_flight;// bytes in flight on the corresponding route (e.g. local_net_id = 0, remote_net_id = 1) before this feedback
  msg.data_in_flight = GetOutstandingData();// bytes still in flight (sent but not yet acknowledged) on that route

The TransportPacketsFeedback message is then handed to the delay-based congestion controller inside GCC. The first step is grouping the packets: within a group, every packet after the first was sent less than 5 ms after the group's first packet. Suppose every packet has a send time t and the first packet of a group was sent at t0: as long as the difference dt = t - t0 <= 5 ms, the packet joins the group that started at t0; once a packet has dt > 5 ms it becomes the first packet of the next group, and subsequent packets are compared against it in the same way. The following figure illustrates this.
[Figure: grouping packets by a 5 ms send-time window]
If the RTPFB carries seven feedback packets pack1 through pack7 with send times st1 through st7, and pack1 through pack4 were all sent within 5 ms of pack1 while pack5 was sent more than 5 ms after pack1, then pack1 through pack4 form group1 and pack5 becomes the first packet of the next group. Since pack6 was sent more than 5 ms after pack5, group2 contains only pack5, and so on. The implementation:

void DelayBasedBwe::IncomingPacketFeedback(const PacketResult& packet_feedback,
                                           Timestamp at_time) {
  // On creation (or after a stream timeout) instantiate inter_arrival_ and the delay_detector_ trendline estimator
  if (last_seen_packet_.IsInfinite() ||
      at_time - last_seen_packet_ > kStreamTimeOut) {
    inter_arrival_.reset(
        new InterArrival((kTimestampGroupLengthMs << kInterArrivalShift) / 1000,
                         kTimestampToMs, true));
    delay_detector_.reset(
        new TrendlineEstimator(key_value_config_, network_state_predictor_));
  }
  last_seen_packet_ = at_time;

  uint32_t send_time_24bits =
      static_cast<uint32_t>(
          ((static_cast<uint64_t>(packet_feedback.sent_packet.send_time.ms())
            << kAbsSendTimeFraction) +
           500) /
          1000) &
      0x00FFFFFF;// compute the packet send time with the same algorithm as abs-send-time (input in ms); masking with 0x00FFFFFF keeps the 24-bit timestamp
  // Shift up send time to use the full 32 bits that inter_arrival works with,
  // so wrapping works properly.
  uint32_t timestamp = send_time_24bits << kAbsSendTimeInterArrivalUpshift;// pad up to the full 32 bits

  uint32_t ts_delta = 0;
  int64_t t_delta = 0;
  int size_delta = 0;
  bool calculated_deltas = inter_arrival_->ComputeDeltas(// compute the inter-group deltas: ts_delta = send-time delta, t_delta = arrival-time delta
      timestamp, packet_feedback.receive_time.ms(), at_time.ms(),
      packet_feedback.sent_packet.size.bytes(), &ts_delta, &t_delta,
      &size_delta);
  double ts_delta_ms = (1000.0 * ts_delta) / (1 << kInterArrivalShift);
  delay_detector_->Update(t_delta, ts_delta_ms,// from the send-time and arrival-time deltas, compute the inter-group delay variation and update the trendline estimator
                          packet_feedback.sent_packet.send_time.ms(),
                          packet_feedback.receive_time.ms(), calculated_deltas);
}

ComputeDeltas groups the packets and computes the send-time and arrival-time deltas between adjacent groups. The Update function of the delay detector (trendline estimator) computes the inter-group delay variation and, once enough samples have been collected, predicts the delay trend with a least-squares linear regression and uses that trend to judge the network utilization.
The flow of ComputeDeltas is as follows:
[Figure: ComputeDeltas flow]
The send-time delta of two groups is the difference between the send times of the last packet of each group; the arrival-time delta is the difference between the arrival times of the last packet of each group. The code:

bool InterArrival::ComputeDeltas(uint32_t timestamp,
                                 int64_t arrival_time_ms,
                                 int64_t system_time_ms,
                                 size_t packet_size,
                                 uint32_t* timestamp_delta, // out: send-time delta to be computed.
                                 int64_t* arrival_time_delta_ms, // out: arrival-time delta to be computed.
                                 int* packet_size_delta) {
  MS_ASSERT(timestamp_delta != nullptr, "timestamp_delta is null");
  MS_ASSERT(arrival_time_delta_ms != nullptr, "arrival_time_delta_ms is null");
  MS_ASSERT(packet_size_delta != nullptr, "packet_size_delta is null");
  bool calculated_deltas = false;
  if (current_timestamp_group_.IsFirstPacket()) {// the very first packet of the first group
  // record the send time and arrival time of the first packet of the new group
    current_timestamp_group_.timestamp = timestamp;
    current_timestamp_group_.first_timestamp = timestamp;
    current_timestamp_group_.first_arrival_ms = arrival_time_ms;
  } else if (!PacketInOrder(timestamp, arrival_time_ms)) {// is the packet out of order?
    return false;// ignore out-of-order packets
  } else if (NewTimestampGroup(arrival_time_ms, timestamp)) {// does the packet start a new group?
    // First packet of a later frame, the previous frame sample is ready.
    if (prev_timestamp_group_.complete_time_ms >= 0) {// a previous group must exist, since inter-group deltas need two adjacent groups
      *timestamp_delta =
          current_timestamp_group_.timestamp - prev_timestamp_group_.timestamp;// send-time delta between the current and the previous group
      *arrival_time_delta_ms = current_timestamp_group_.complete_time_ms -
                               prev_timestamp_group_.complete_time_ms;// arrival-time delta between the current and the previous group
      MS_DEBUG_DEV("timestamp previous/current [%" PRIu32 "/%" PRIu32"] complete time previous/current [%" PRIi64 "/%" PRIi64 "]",
          prev_timestamp_group_.timestamp, current_timestamp_group_.timestamp,
          prev_timestamp_group_.complete_time_ms, current_timestamp_group_.complete_time_ms);
      // Check system time differences to see if we have an unproportional jump
      // in arrival time. In that case reset the inter-arrival computations.
      int64_t system_time_delta_ms =
          current_timestamp_group_.last_system_time_ms -
          prev_timestamp_group_.last_system_time_ms;// delta of the local processing (system) times
      if (*arrival_time_delta_ms - system_time_delta_ms >=
          kArrivalTimeOffsetThresholdMs) {
        MS_WARN_TAG(bwe,
            "the arrival time clock offset has changed (diff = %" PRIi64 "ms, resetting",
            *arrival_time_delta_ms - system_time_delta_ms);
        Reset();
        return false;
      }
      if (*arrival_time_delta_ms < 0) {
        // The group of packets has been reordered since receiving its local
        // arrival timestamp.
        ++num_consecutive_reordered_packets_;
        if (num_consecutive_reordered_packets_ >= kReorderedResetThreshold) {
          MS_WARN_TAG(bwe,
                 "packets are being reordered on the path from the "
                 "socket to the bandwidth estimator. Ignoring this "
                 "packet for bandwidth estimation, resetting");
          Reset();
        }
        return false;
      } else {
        num_consecutive_reordered_packets_ = 0;
      }
      MS_ASSERT(*arrival_time_delta_ms >= 0, "arrival_time_delta_ms is < 0");
      *packet_size_delta = static_cast<int>(current_timestamp_group_.size) -
                           static_cast<int>(prev_timestamp_group_.size);
      calculated_deltas = true;
    }
    prev_timestamp_group_ = current_timestamp_group_;// the current group is complete; it becomes the previous group and a new group starts
    // The new timestamp is now the current frame.
    current_timestamp_group_.first_timestamp = timestamp;
    current_timestamp_group_.timestamp = timestamp;
    current_timestamp_group_.first_arrival_ms = arrival_time_ms;
    current_timestamp_group_.size = 0;
    MS_DEBUG_DEV("new timestamp group: first_timestamp:%" PRIu32 ", first_arrival_ms:%" PRIi64,
        current_timestamp_group_.first_timestamp, current_timestamp_group_.first_arrival_ms);
  } else {
    current_timestamp_group_.timestamp =
        LatestTimestamp(current_timestamp_group_.timestamp, timestamp);
  }
  // Accumulate the frame size.
  current_timestamp_group_.size += packet_size;
  current_timestamp_group_.complete_time_ms = arrival_time_ms;
  current_timestamp_group_.last_system_time_ms = system_time_ms;
  return calculated_deltas;
}
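
The 5 ms grouping rule itself lives in NewTimestampGroup() (together with its burst handling), which is not shown above. A much simplified sketch of the idea, working directly in milliseconds instead of the shifted timestamp domain and ignoring burst detection (so an illustration, not the actual mediasoup code):

#include <cstdint>

// Simplified illustration of the group test: a packet starts a new group when
// its send time is more than 5 ms after the first send time of the current group.
struct SimpleTimestampGroup {
  bool has_first = false;
  int64_t first_send_time_ms = 0;
};

bool BelongsToCurrentGroup(const SimpleTimestampGroup& group, int64_t send_time_ms) {
  constexpr int64_t kGroupLengthMs = 5;
  if (!group.has_first)
    return true;  // the very first packet opens the first group
  return send_time_ms - group.first_send_time_ms <= kGroupLengthMs;
}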

The computed send-time and arrival-time deltas of the packet groups are passed into the Update function of the delay detector (trendline estimator). Its processing flow is as follows:
[Figure: TrendlineEstimator::Update flow]
The implementation:

void TrendlineEstimator::Update(double recv_delta_ms,
                                double send_delta_ms,
                                int64_t send_time_ms,
                                int64_t arrival_time_ms,
                                bool calculated_deltas) {
  if (calculated_deltas) {
    const double delta_ms = recv_delta_ms - send_delta_ms;// inter-group delay variation
    ++num_of_deltas_;
    num_of_deltas_ = std::min(num_of_deltas_, kDeltaCounterMax);
    if (first_arrival_time_ms_ == -1)
      first_arrival_time_ms_ = arrival_time_ms;// arrival time of the first packet

    // Exponential backoff filter.
    accumulated_delay_ += delta_ms;// accumulated inter-group delay variation
    // smoothed_delay_ = smoothing_coef_ * smoothed_delay_ +
                      // (1 - smoothing_coef_) * accumulated_delay_;
    smoothed_delay_ = smoothing_coef_ * delta_ms +
                      (1 - smoothing_coef_) * smoothed_delay_;// smooth the current delay variation, smoothing_coef_ = 0.6
    // Simple linear regression.
    delay_hist_.push_back(std::make_pair(
        static_cast<double>(arrival_time_ms - first_arrival_time_ms_),// store a sample pair (arrival-time offset, smoothed delay variation)
        smoothed_delay_));
    if (delay_hist_.size() > window_size_)
      delay_hist_.pop_front();
    double trend = prev_trend_;
    if (delay_hist_.size() == window_size_) {// once enough samples are collected, fit a least-squares line through them to predict the trend
      // The delay trend can be seen as an estimate of (send_rate - capacity)/capacity.
      // 0 < trend < 1   ->  the delay increases, queues are filling up
      //   trend == 0    ->  the delay does not change
      //   trend < 0     ->  the delay decreases, queues are being emptied
      trend = LinearFitSlope(delay_hist_).value_or(trend);// least-squares linear regression: predicted trend of the delay gradient
    }
    Detect(trend, send_delta_ms, arrival_time_ms);// detect the network state from trend
  }
  else {
    MS_DEBUG_DEV("no calculated deltas");
  }

  if (network_state_predictor_) {
    hypothesis_predicted_ = network_state_predictor_->Update(
        send_time_ms, arrival_time_ms, hypothesis_);
  }
}
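
LinearFitSlope(), called in Update() above, performs the least-squares fit over the (arrival-time offset, smoothed delay) pairs stored in delay_hist_. A sketch of it, closely following the WebRTC implementation (the version bundled with a given mediasoup release may differ slightly):

// Least-squares slope of the (x, y) samples:
// k = sum((x_i - x_avg)(y_i - y_avg)) / sum((x_i - x_avg)^2).
absl::optional<double> LinearFitSlope(
    const std::deque<std::pair<double, double>>& points) {
  if (points.size() < 2)
    return absl::nullopt;
  // Compute the center of mass.
  double sum_x = 0;
  double sum_y = 0;
  for (const auto& point : points) {
    sum_x += point.first;
    sum_y += point.second;
  }
  double x_avg = sum_x / points.size();
  double y_avg = sum_y / points.size();
  // Compute the slope.
  double numerator = 0;
  double denominator = 0;
  for (const auto& point : points) {
    numerator += (point.first - x_avg) * (point.second - y_avg);
    denominator += (point.first - x_avg) * (point.first - x_avg);
  }
  if (denominator == 0)
    return absl::nullopt;
  return numerator / denominator;
}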

The network state is then inferred from trend. WebRTC defines three network states (normal, overuse and underuse) that describe how the available bandwidth is being used. Overuse means the bandwidth is being over-used, which is taken as a sign of congestion; underuse means the bandwidth is not fully utilized. For the theory see the over-use detector section of the GCC congestion control article. The flow for deriving the current network state from the delay-gradient trend is as follows:
[Figure: deriving the network state from the delay trend]
The code:

void TrendlineEstimator::Detect(double trend, double ts_delta, int64_t now_ms) {
  if (num_of_deltas_ < 2) {// with fewer than two samples the network state is not evaluated and defaults to normal
    hypothesis_ = BandwidthUsage::kBwNormal;
    return;
  }
  const double modified_trend =
      std::min(num_of_deltas_, kMinNumDeltas) * trend * threshold_gain_;// scale the fitted inter-group delay trend by the sample count and the gain (the exact rationale is left for further study)
  prev_modified_trend_ = modified_trend;
  // BWE_TEST_LOGGING_PLOT(1, "T", now_ms, modified_trend);
  // BWE_TEST_LOGGING_PLOT(1, "threshold", now_ms, threshold_);
  if (modified_trend > threshold_) {// above the threshold
    if (time_over_using_ == -1) {
      // Initialize the timer. Assume that we've been
      // over-using half of the time since the previous
      // sample.
      time_over_using_ = ts_delta / 2;
    } else {
      // Increment timer
      time_over_using_ += ts_delta;// accumulate the send-time deltas spent over the threshold
    }
    overuse_counter_++;
    if (time_over_using_ > overusing_time_threshold_ && overuse_counter_ > 1) {// a single detection is not enough to declare over-use; the state
    // is only reported once continuous over-use has accumulated for overusing_time_threshold_ = 30 ms
      if (trend >= prev_trend_) {// only report over-use if the current trend has not decreased compared with the previous one
        time_over_using_ = 0;
        overuse_counter_ = 0;
        MS_DEBUG_DEV("hypothesis_: BandwidthUsage::kBwOverusing");

#if MS_LOG_DEV_LEVEL == 3
        for (auto& kv : delay_hist_) {
          MS_DEBUG_DEV("arrival_time_ms - first_arrival_time_ms_:%f, smoothed_delay_:%f", kv.first, kv.second);
        }
#endif

        hypothesis_ = BandwidthUsage::kBwOverusing;
      }
    }
  } else if (modified_trend < -threshold_) {// below the negative threshold: the delay is decreasing, the network is under-used
    time_over_using_ = -1;
    overuse_counter_ = 0;
    hypothesis_ = BandwidthUsage::kBwUnderusing;
    MS_DEBUG_DEV("---- BandwidthUsage::kBwUnderusing ---");
  } else {// otherwise: normal
    time_over_using_ = -1;
    overuse_counter_ = 0;
    MS_DEBUG_DEV("---- BandwidthUsage::kBwNormal ---");
    hypothesis_ = BandwidthUsage::kBwNormal;
  }
  prev_trend_ = trend;
  UpdateThreshold(modified_trend, now_ms);// dynamically update the threshold threshold_
}

The code that dynamically updates threshold_ follows; the equation behind it is described in the over-use detector section of the GCC congestion control article:

void TrendlineEstimator::UpdateThreshold(double modified_trend,
                                         int64_t now_ms) {
  if (last_update_ms_ == -1)
    last_update_ms_ = now_ms;

  if (fabs(modified_trend) > threshold_ + kMaxAdaptOffsetMs) {
    // Avoid adapting the threshold to big latency spikes, caused e.g.,
    // by a sudden capacity drop.
    last_update_ms_ = now_ms;
    return;
  }

  const double k = fabs(modified_trend) < threshold_ ? k_down_ : k_up_;
  const int64_t kMaxTimeDeltaMs = 100;
  int64_t time_delta_ms = std::min(now_ms - last_update_ms_, kMaxTimeDeltaMs);
  threshold_ += k * (fabs(modified_trend) - threshold_) * time_delta_ms;
  threshold_ = rtc::SafeClamp(threshold_, 6.f, 600.f);
  last_update_ms_ = now_ms;
}
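
In equation form, the update above is threshold(t_i) = threshold(t_i-1) + k * (|modified_trend(t_i)| - threshold(t_i-1)) * delta_t, where k is k_down_ while |modified_trend| is below the current threshold and k_up_ otherwise, delta_t is the time since the last update capped at 100 ms, and the result is clamped to the range [6, 600].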

Finally, the bitrate is updated according to the detected network state. This involves another module, the rate controller AimdRateControl, which produces the final delay-based bitrate; see the article on the AimdRateControl rate controller of WebRTC's GCC module for details.
At the end, the loss-based bitrate, the acknowledged (ack) bitrate, the probing (prober) bitrate and the delay-based bitrate are combined into the final estimate, in CapBitrateToThresholds as already shown above.


3. Deciding the final transmission bitrate
The final output bitrate is applied to the encoder and to the WebRTC pacer, controlling the encoding rate and the sending rate so that the bitrate adapts dynamically to the network state.
