One of the most common questions CCIE candidates face when studying QoS MQC for the lab is: “What is the difference between bandwidth percent and bandwidth remaining percent?”. Both are used in CBWFQ when implementing congestion management, so what is the difference? Answering that question is the focus of this tutorial.
We will be using the following topology for this tutorial:

The Scenario:
“The users on the 1.1.1.0/24 subnet have been complaining of slow network response times recently when connecting to services on the 2.2.2.0/24 subnet. Users on the 1.1.1.0/24 subnet use voice and web applications. During times of congestion we want the web applications to be guaranteed bandwidth. Also, delay-sensitive voice traffic needs to be given priority over all other traffic.”
The scenario above is typical of something you might see in your own environment. We have congestion on the Ethernet link between R1 and R2, and we need to implement a congestion management strategy so that voice and web traffic are given bandwidth guarantees during times of congestion. Let's say that we want to reserve at least 20% of the bandwidth during congestion for web traffic. We'll implement that first and come back to voice traffic later.
First, let's enable fair queueing on R1's interface and take a quick look at the bandwidth:
R1(config)#int f1/0
R1(config-if)#fair-queue 
R1#sh queueing int f1/0
Interface FastEthernet1/0 queueing strategy: fair
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/256 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 75000 kilobits/sec
You can see above that the available bandwidth when you implement fair queueing on an interface is 75,000 kb/s. Wait a second, this is a Fast Ethernet interface! Shouldn't the available bandwidth be 100,000 kb/s (100 Mb/s)? Why are we operating at only 75%? What happened to the other 25%?
We've implemented fair queueing on the interface between R1 and R2. Fair queueing is a congestion management strategy, but probably not the one we want to use for this particular scenario. For the moment, we just want to see the effect any congestion management strategy has on an interface and set a baseline for this tutorial.
By default, when a congestion management strategy like fair queueing or CBWFQ is implemented on an interface, 25% of the interface bandwidth is reserved for things like routing protocol updates and important layer 2 traffic. 25% of 100 Mb/s is quite a lot for routing updates! Let's change this:
R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#int f1/0
R1(config-if)#max-reserved-bandwidth 100
R1(config-if)#end
R1#sh queueing int f1/0
Interface FastEthernet1/0 queueing strategy: fair
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/256 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 100000 kilobits/sec
You can see above that we used the max-reserved-bandwidth command so that 100% of the interface bandwidth is reservable (i.e. none of it is held back).
Now that we've set up the bandwidth, let's set up CBWFQ. We'll reserve 20% of the bandwidth for web traffic during times of congestion:
class-map match-any WEB
 match protocol http
!
policy-map QOS
 class WEB
  bandwidth percent 20
!
interface FastEthernet1/0
 no fair-queue
 max-reserved-bandwidth 100
 service-policy output QOS
You can see above that we are using NBAR to match the HTTP protocol and reserving 20% of the link bandwidth when congestion occurs using the bandwidth percent command.
Let’s verify this:
R1#sh queueing int f1/0
Interface FastEthernet1/0 queueing strategy: fair
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/256 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 80000 kilobits/sec
You can see above that 80,000 kb/s is now available. This doesn't mean that HTTP traffic can ONLY use 20% of the link (the bandwidth command does not have a built-in policer). The bandwidth command only comes into play when there is congestion on the interface. This configuration is telling IOS that when congestion occurs, keep a minimum of 20% (20,000 kb/s) of the bandwidth for HTTP traffic.
The show queueing interface command shows that when congestion occurs, 80% of the bandwidth is available for everything else.
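If you genuinely wanted to cap web traffic at 20% of the link, rather than just guarantee it a minimum, you would need an explicit policer alongside the bandwidth statement. A minimal sketch, reusing the WEB class from above (percentage-based policing syntax varies between IOS versions, so treat this as illustrative rather than definitive):
policy-map QOS
 class WEB
  bandwidth percent 20
  police cir percent 20
Note that a policer like this limits HTTP at all times, not just during congestion, which is usually not what you want from a simple congestion management policy.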
Let's add a reservation for voice:
class-map match-any WEB
 match protocol http
class-map match-any VOICE
 match protocol rtp audio
 match protocol rtcp
!
policy-map QOS
 class WEB
  bandwidth percent 20
 class VOICE
  priority percent 10
What we've done here is add an LLQ (low latency queue) for voice traffic. We are using NBAR to match the voice RTP stream and its control protocol. The priority command sets up a low latency queue for the voice class, which means that voice traffic will be served before all other traffic. The priority command also implements a built-in policer: voice traffic is served first, with a bandwidth guarantee up to a maximum of 10% of the interface when there is congestion. This is to stop the LLQ from starving all the other queues.
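As a side note, the priority command also accepts an absolute rate in kb/s instead of a percentage. A rough equivalent on this 100 Mb/s link might look like the sketch below, shown as a separate, hypothetical policy map QOS-KBPS, since IOS generally expects the bandwidth and priority statements within a single policy map to use the same form (all percent-based or all kb/s-based):
policy-map QOS-KBPS
 class VOICE
  priority 10000
 class WEB
  bandwidth 20000
Here 10,000 kb/s and 20,000 kb/s correspond to the 10% and 20% reservations used in this tutorial. Sticking with the percent-based policy, let's check the queueing again: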
R1#sh queueing int f1/0
Interface FastEthernet1/0 queueing strategy: fair
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/256 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 70000 kilobits/sec
You can see here that the available bandwidth has changed to 70,000 kb/s (70%). This makes sense: we are using 10% for voice and 20% for web traffic.
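The arithmetic behind that figure:
   Total reservable bandwidth (max-reserved-bandwidth 100)  = 100,000 kb/s
   VOICE: priority percent 10   (10% of 100,000 kb/s)       =  10,000 kb/s
   WEB:   bandwidth percent 20  (20% of 100,000 kb/s)       =  20,000 kb/s
   Available Bandwidth                                       =  70,000 kb/s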
Bandwidth Remaining Percent
So what does bandwidth remaining percent do then? We will change the bandwidth reservation of the web class so that it uses the bandwidth remaining percent command and see the effect in the show queueing interface output.
class-map match-any WEB
 match protocol http
class-map match-any VOICE
 match protocol rtp audio
 match protocol rtcp
!
policy-map QOS
 class WEB
  bandwidth remaining percent 20
 class VOICE
  priority percent 10
R1#sh queueing int f1/0
Interface FastEthernet1/0 queueing strategy: fair
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/256 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 90000 kilobits/sec
You can see above that we have changed the bandwidth percent command to bandwidth remaining percent. Take a look at that available bandwidth. 90% is available?! Didn't we just make a 20% reservation for web traffic?
The bandwidth remaining percent command makes a reservation from the available bandwidth, not the total reservable bandwidth. What we have done here is reserve 10% of the total reservable bandwidth for voice traffic. This leaves 90,000 kb/s when congestion occurs, and that is the available bandwidth. Using the bandwidth remaining percent command we have made a 20% reservation of this remainder (the available bandwidth) for HTTP: 20% of 90,000 kb/s is 18,000 kb/s.
The bandwidth remaining percent command takes a percentage of the available bandwidth, not of the total reservable bandwidth (100% of the interface). The bandwidth percent command takes a percentage of the total reservable bandwidth.
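Putting numbers on the current configuration:
   Total reservable bandwidth (max-reserved-bandwidth 100)   = 100,000 kb/s
   VOICE: priority percent 10 (10% of 100,000 kb/s)          =  10,000 kb/s
   Available Bandwidth                                       =  90,000 kb/s
   WEB: bandwidth remaining percent 20 (20% of 90,000 kb/s)  =  18,000 kb/s
The 18,000 kb/s for WEB is carved out of the 90,000 kb/s remainder, which is why show queueing still reports 90,000 kb/s available rather than subtracting the web reservation.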
Let's change the max-reserved-bandwidth and see the effect this has on the available bandwidth.
R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#int f1/0
R1(config-if)#max-reserved-bandwidth 90
Reservable bandwidth is being reduced.
Some existing reservations may be terminated.  
You can see above that we have changed the reservable bandwidth to 90% using the max-reserved-bandwidth command. This means that 10% of the interface bandwidth is now set aside for routing protocols and other control traffic.
R1#sh queueing int f1/0
Interface FastEthernet1/0 queueing strategy: fair
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/256 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 80000 kilobits/sec
So we have both the VOICE traffic and the unreservable portion (routing and important layer 2 traffic) each using 10% of the interface bandwidth. This leaves us with 80,000 kb/s (80%) as the available bandwidth. WEB traffic then takes 20% of this 80,000 kb/s, which is 16,000 kb/s.
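The same arithmetic with max-reserved-bandwidth 90:
   Interface bandwidth                                       = 100,000 kb/s
   Held back by max-reserved-bandwidth 90 (10%)              =  10,000 kb/s
   VOICE: priority percent 10 (10% of 100,000 kb/s)          =  10,000 kb/s
   Available Bandwidth                                       =  80,000 kb/s
   WEB: bandwidth remaining percent 20 (20% of 80,000 kb/s)  =  16,000 kb/s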
In summary, we have the total reservable bandwidth of 100%. The bandwidth percent and priority percent commands make reservations from the total reservable bandwidth, and the max-reserved-bandwidth command controls how much of the interface bandwidth is reservable in the first place. The bandwidth remaining percent command makes reservations from the available bandwidth (what is left of the total reservable bandwidth after the other reservations; that remainder is treated as 100%).
HTH. Now back to labs.
Summary:
  • By default, 25% of the interface bandwidth is reserved for routing protocols and important layer 2 traffic; only the remaining 75% is reservable. The max-reserved-bandwidth command is used to change how much bandwidth is set aside for this traffic.
  • The bandwidth percent and priority percent commands make reservations from the total reservable interface bandwidth.
  • The bandwidth remaining percent command makes reservations from the available bandwidth (what's left of the total reservable bandwidth after the other reservations are made).