BBR 2.0 is finally here, and the relevant code can now be seen in the QUIC project. Its authors give a brief introduction to BBR 2.0 in [1]. The ns-3 simulation code can be downloaded from [3]. I have also written a review paper [4] that compares and analyzes the performance of the related algorithms.
I set up a point-to-point link in ns-3 (bw = 3 Mbps, one-way delay = 100 ms, queue = 300 ms) with three flows in the network. The simulation results are shown in the figures below:
BBRv1, bandwidth over time:
BBRv1, one-way packet delay:
BBRv2, bandwidth over time:
BBRv2, one-way packet delay:
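For reference, the bottleneck above can be configured in ns-3 roughly as follows. This is an illustrative fragment, not the exact script from [3]: 300 ms of buffering at 3 Mbps is about 112,500 bytes, and the queue-size attribute shown requires a reasonably recent ns-3 release.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"

using namespace ns3;

int main() {
  // Two nodes connected by the 3 Mbps / 100 ms bottleneck from the text.
  NodeContainer nodes;
  nodes.Create(2);

  PointToPointHelper p2p;
  p2p.SetDeviceAttribute("DataRate", StringValue("3Mbps"));
  p2p.SetChannelAttribute("Delay", StringValue("100ms"));
  // ~300 ms of buffering at 3 Mbps: 3 Mbps * 0.3 s / 8 = 112500 bytes.
  p2p.SetQueue("ns3::DropTailQueue<Packet>", "MaxSize",
               QueueSizeValue(QueueSize("112500B")));

  NetDeviceContainer devices = p2p.Install(nodes);

  Simulator::Run();
  Simulator::Destroy();
  return 0;
}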
BBR v2 adds a response to packet loss in the STARTUP phase, but the condition is fairly strict: if the loss events in one round (readers of the BBR v1 code will recognize the round-trip-count concept) reach 8 and the bytes lost exceed 2% (0.02) of the bytes in flight, full bandwidth is considered to have been reached.
// In STARTUP: if the losses in this round are excessive, cap inflight_hi at
// the estimated BDP and declare full bandwidth reached.
if (loss_events_in_round_ >= Params().startup_full_loss_count &&
    model_->IsInflightTooHigh(congestion_event)) {
  model_->set_inflight_hi(bdp);  // bdp: the model's estimated bw * min_rtt
  full_bandwidth_reached_ = true;
}
IsInflightTooHigh() simply decides whether the network has produced too many losses. The parameter Params().loss_threshold is 0.02.
bool Bbr2NetworkModel::IsInflightTooHigh(
    const Bbr2CongestionEvent& congestion_event) const {
  const SendTimeState& send_state = SendStateOfLargestPacket(congestion_event);
  if (!send_state.is_valid) {
    // Not enough information.
    return false;
  }
  const QuicByteCount inflight_at_send = BytesInFlight(send_state);
  if (inflight_at_send > 0 && bytes_lost_in_round_ > 0) {
    QuicByteCount lost_in_round_threshold =
        inflight_at_send * Params().loss_threshold;
    if (bytes_lost_in_round_ > lost_in_round_threshold) {
      return true;
    }
  }
  return false;
}
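To make the numbers concrete, here is a small standalone sketch of the same check. The parameter values (8 loss events per round, 2% loss threshold) follow the text; the function and constant names are mine, not the quiche API.
#include <cstdint>
#include <iostream>

constexpr int kStartupFullLossCount = 8;  // loss events per round
constexpr double kLossThreshold = 0.02;   // fraction of bytes in flight

// Returns true when STARTUP should treat this round's losses as a signal
// that full bandwidth has been reached.
bool StartupLossExit(int loss_events_in_round, uint64_t bytes_lost_in_round,
                     uint64_t inflight_at_send) {
  if (loss_events_in_round < kStartupFullLossCount) return false;
  // Same test as IsInflightTooHigh(): lost bytes exceed 2% of the bytes
  // in flight when the largest acked/lost packet was sent.
  return inflight_at_send > 0 &&
         bytes_lost_in_round > inflight_at_send * kLossThreshold;
}

int main() {
  // Example: 10 loss events, 9000 bytes lost, 300000 bytes in flight.
  // 9000 > 300000 * 0.02 = 6000, so STARTUP exits to full bandwidth.
  std::cout << std::boolalpha << StartupLossExit(10, 9000, 300000) << "\n";
}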
In the Probe BW phase, the pacing gain is no longer updated on the fixed eight-phase cycle [1.25, 0.75, 1, 1, 1, 1, 1, 1]. A pacing gain of 1.25 corresponds to the PROBE_UP phase, 0.75 to PROBE_DOWN, and 1 to PROBE_CRUISE, and a new PROBE_REFILL phase has been added. The congestion window is also no longer simply 2 * bw * RTT_min; it is bounded by inflight_lo and inflight_hi, both of which are updated dynamically. If the loss ratio in a round exceeds the threshold (as judged by IsInflightTooHigh), the current bytes in flight are assigned to inflight_hi, meaning the network is entering the danger zone.
QuicByteCount inflight_at_send = BytesInFlight(send_state);
model_->set_inflight_hi(inflight_at_send);
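As a quick summary of the sub-phases mentioned above, the sketch below maps each Probe BW phase to its pacing gain. The 1.25/0.75/1 values come from the text; assuming PROBE_REFILL also paces at gain 1.0 is my reading, not a quote from the quiche code.
#include <cstdio>

// Illustrative only: Probe BW sub-phases and their pacing gains.
enum class CyclePhase { PROBE_UP, PROBE_DOWN, PROBE_CRUISE, PROBE_REFILL };

double PacingGainFor(CyclePhase phase) {
  switch (phase) {
    case CyclePhase::PROBE_UP:     return 1.25;  // probe for extra bandwidth
    case CyclePhase::PROBE_DOWN:   return 0.75;  // drain the queue just built
    case CyclePhase::PROBE_CRUISE: return 1.0;   // cruise at the current estimate
    case CyclePhase::PROBE_REFILL: return 1.0;   // (assumed) refill the pipe before UP
  }
  return 1.0;
}

int main() {
  std::printf("UP gain = %.2f, DOWN gain = %.2f\n",
              PacingGainFor(CyclePhase::PROBE_UP),
              PacingGainFor(CyclePhase::PROBE_DOWN));
}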
How inflight_lo is configured: the bytes_lost_in_round_ > 0 check means inflight_lo (and bandwidth_lo) are only adapted once losses have appeared.
void Bbr2NetworkModel::AdaptLowerBounds(
    const Bbr2CongestionEvent& congestion_event) {
  if (bytes_lost_in_round_ > 0) {
    if (bandwidth_lo_.IsInfinite()) {
      bandwidth_lo_ = MaxBandwidth();
    }
    if (inflight_lo_ == inflight_lo_default()) {
      inflight_lo_ = congestion_event.prior_cwnd;
    }
    bandwidth_lo_ = std::max(bandwidth_latest_, bandwidth_lo_ * (1.0 - kBeta));
    inflight_lo_ = std::max<QuicByteCount>(inflight_latest_,
                                           inflight_lo_ * (1.0 - kBeta));
  }
}
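A quick worked example of the decay above, assuming kBeta = 0.3 (the 30% multiplicative decrease used in the BBR v2 design; check the actual constant in the code):
#include <algorithm>
#include <cstdio>

int main() {
  double bandwidth_lo = 10.0;     // Mbps, current lower bound
  double bandwidth_latest = 6.0;  // Mbps, delivery rate measured this round
  const double kBeta = 0.3;       // assumed decay factor
  // On a lossy round the bound decays by 30%, but never drops below what
  // was actually delivered in the round.
  bandwidth_lo = std::max(bandwidth_latest, bandwidth_lo * (1.0 - kBeta));
  std::printf("bandwidth_lo = %.1f Mbps\n", bandwidth_lo);  // prints 7.0
}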
However, inflight_hi does affect bandwidth probing: a flow may have pacing rate available yet be prevented from sending by the cwnd. This is exactly why, in BBR v1, the cwnd was always set to 2 * bw * RTT_min. So how is inflight_hi raised? I originally assumed ProbeInflightHighUpward() implemented an additive-increase pattern, but the way cycle_.probe_up_bytes is updated makes the growth exponential: the amount added per RTT follows the pattern 1, 2, 4, 8, ... This is also described in the slides [1].
void Bbr2ProbeBwMode::ProbeInflightHighUpward(
    const Bbr2CongestionEvent& congestion_event) {
  // Increase inflight_hi by the number of probe_up_bytes within probe_up_acked.
  cycle_.probe_up_acked += congestion_event.bytes_acked;
  if (cycle_.probe_up_acked >= cycle_.probe_up_bytes) {
    uint64_t delta = cycle_.probe_up_acked / cycle_.probe_up_bytes;
    cycle_.probe_up_acked -= delta * cycle_.probe_up_bytes;
    /*QUIC_DVLOG(3) << sender_ << " Raising inflight_hi from "
                  << model_->inflight_hi() << " to "
                  << model_->inflight_hi() + delta * kDefaultTCPMSS
                  << ". probe_up_bytes:" << cycle_.probe_up_bytes
                  << ", delta:" << delta
                  << ", (new)probe_up_acked:" << cycle_.probe_up_acked;*/
    model_->set_inflight_hi(model_->inflight_hi() + delta * kDefaultTCPMSS);
  }
  if (congestion_event.end_of_round_trip) {
    RaiseInflightHighSlope();
  }
}
How cycle_.probe_up_bytes is updated:
void Bbr2ProbeBwMode::RaiseInflightHighSlope() {
  uint64_t growth_this_round = 1 << cycle_.probe_up_rounds;
  cycle_.probe_up_rounds = std::min<uint64_t>(cycle_.probe_up_rounds + 1, 30);
  uint64_t probe_up_bytes = sender_->GetCongestionWindow() / growth_this_round;
  cycle_.probe_up_bytes =
      std::max<QuicByteCount>(probe_up_bytes, kDefaultTCPMSS);
}
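Putting the two functions together, here is a toy simulation (my own sketch with assumed numbers) of how inflight_hi grows round by round. Because probe_up_bytes halves every round, the number of MSS added per round doubles: 1, 2, 4, 8, ...
#include <algorithm>
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t kMSS = 1500;
  const uint64_t cwnd = 64 * kMSS;  // assume a fixed cwnd for simplicity
  uint64_t inflight_hi = cwnd;
  uint64_t probe_up_rounds = 0;

  for (int round = 0; round < 5; ++round) {
    // RaiseInflightHighSlope(): probe_up_bytes = cwnd / 2^probe_up_rounds.
    uint64_t growth_this_round = uint64_t{1} << probe_up_rounds;
    probe_up_rounds = std::min<uint64_t>(probe_up_rounds + 1, 30);
    uint64_t probe_up_bytes = std::max(cwnd / growth_this_round, kMSS);

    // ProbeInflightHighUpward(): assume one full cwnd of bytes is acked in
    // the round; every probe_up_bytes acked adds one MSS to inflight_hi.
    uint64_t delta = cwnd / probe_up_bytes;
    inflight_hi += delta * kMSS;
    std::printf("round %d: +%llu MSS, inflight_hi = %llu bytes\n", round,
                (unsigned long long)delta, (unsigned long long)inflight_hi);
  }
}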
Constraint on the congestion window: in PROBE_CRUISE the limit leaves headroom, i.e. it uses inflight_hi multiplied by (1 - 0.15).
Limits<QuicByteCount> Bbr2ProbeBwMode::GetCwndLimits() const {
  if (cycle_.phase == CyclePhase::PROBE_CRUISE) {
    return NoGreaterThan(
        std::min(model_->inflight_lo(), model_->inflight_hi_with_headroom()));
  }
  return NoGreaterThan(std::min(model_->inflight_lo(), model_->inflight_hi()));
}
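For reference, the headroom-limited value used in PROBE_CRUISE can be computed roughly like this; a sketch assuming the 0.15 headroom fraction from the text, not the exact quiche helper:
#include <algorithm>
#include <cstdint>

using QuicByteCount = uint64_t;

// Keep ~15% of inflight_hi free while cruising, so the bottleneck queue can
// drain and other flows can grab the spare capacity.
QuicByteCount InflightHiWithHeadroom(QuicByteCount inflight_hi) {
  const double kHeadroomFraction = 0.15;  // assumed, from the text
  QuicByteCount headroom =
      static_cast<QuicByteCount>(inflight_hi * kHeadroomFraction);
  return inflight_hi - std::min(headroom, inflight_hi);
}
// Example: inflight_hi = 100000 bytes -> cwnd limit of 85000 bytes in CRUISE.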
The BBR v2 curves look quite similar to CUBIC's:
References
[1] BBR v2: A Model-based Congestion Control
[2] The latest progress of Google's continuously updated TCP BBR v2.0
[3] bbr simulation
[4] An evaluation of bottleneck bandwidth and round trip time and its variants