Now let’s look at a typical policy that might be attached to a logical interface (a construct referred to as shape on parent or queue on child):
policy-map child100
  class voice
    priority
    police cir 100m
policy-map parent100
  class class-default
    shape average 900m
    service-policy child100
interface GigabitEthernet1/0/0.100
  encaps dot1q 100
  service-policy out parent100
In this construct, you are required to configure a shaper in the parent policy (shape average 900m). The original intent of this construct was to apportion bandwidth to each logical interface. We consider the shape (Max) rate to be the bandwidth owned by that logical interface and allow the child policy to apportion bandwidth within that owned share.
One useful application of this construct is to condition traffic for a remote site. For example, let's say that your corporate hub has a GigabitEthernet link but is sending traffic to a remote branch with a T1 connection. You want to send traffic at the rate the remote branch can receive it. To avoid potentially dropping packets in the provider device that offers service to that branch, you would configure the parent shaper at a T1 rate and queue packets on the hub. In this way you, rather than the provider, retain control of what is forwarded first should that branch link become a congestion point.
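A hub-side configuration for this scenario might look as follows. This is a sketch only: the policy and class names, the 128 kbps voice policer, and the subinterface numbering are illustrative assumptions, and 1536000 bps is used as the usable T1 payload rate:

policy-map branch-child
  class voice
    priority
    police cir 128000
  class class-default
    fair-queue
policy-map branch-parent
  class class-default
    shape average 1536000
    service-policy branch-child
interface GigabitEthernet0/0/0.300
  encaps dot1q 300
  service-policy out branch-parent

Because the parent shaper matches the branch access rate, congestion moves from the provider's device to the hub, where the child policy decides which packets are sent first.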
Customers have asked for the ability to oversubscribe the shapers on logical interfaces (representing either individual subscribers or remote sites), on the assumption that not all logical interfaces will be active at all times. While we still want to cap the throughput of each individual subscriber, we don't want to waste bandwidth when a logical interface is not consuming its full allocated share.
So, should you oversubscribe? If you do, configure a bandwidth remaining ratio in each parent policy to provide fairness under congestion through the excess weight values, and be aware of what service any individual logical interface would receive when congestion does occur.
Returning to the configuration, here is the resultant hierarchy:
Figure 24. Shape on Parent / Queue on Child Construct
As stated, a child policy defines bandwidth sharing within the logical interface. We usually refer to the queues here (voice, etc.) as class queues (with treatment defined by classes within the policy-map) and to the schedule at this layer as the class layer schedule.
In the parent policy we define a parent shaper (Max: 900M) and also receive the implicit excess bandwidth share of '1' (Ex: 1). Observe that the QoS configuration does not explicitly specify where this logical interface should be grafted onto the existing interface hierarchy (note the un-attached schedule entry). The router must know which physical interface a logical interface is associated with in order to determine where to build the hierarchy.
For a policy on a VLAN, it is evident which physical interface is involved; we attach the (logical interface) policy in the subinterface configuration. For other interface types (e.g., a tunnel interface), we may need to examine routing information to determine the egress physical interface for that particular logical interface.
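For example, a shape on parent policy might be attached to a tunnel interface as sketched below (the names and rates are illustrative assumptions). Nothing in this configuration names a physical interface; the router resolves the tunnel's egress interface from routing information and grafts the hierarchy there:

policy-map tunnel-child
  class voice
    priority
    police cir 10000000
policy-map tunnel-parent
  class class-default
    shape average 50000000
    service-policy tunnel-child
interface Tunnel1
  service-policy out tunnel-parent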
Figure 25. Existing Interface Hierarchy (The World Before the Graft)
After we know which interface is involved, we can modify the hierarchy for that interface. First we create a schedule (the logical interface aggregation) that will serve as a grafting spot for the logical interface hierarchy defined in the shape on parent (or queue on child) policy.
Initially, the interface schedule had a single child, the interface default queue. Now, we create a second child, the logical interface aggregation schedule. Observe how the excess weight for this schedule matches that of the interface default queue – it defaults to ‘1’ as always.
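To see what this equal weighting means, consider a moment when both the interface default queue and the logical interface aggregation schedule are congested and no minimum guarantees are in play. With excess weights of 1 and 1, each child of the interface schedule receives an equal share of the interface bandwidth:

  interface default queue share = 1 / (1 + 1) = 50%
  logical interface aggregation share = 1 / (1 + 1) = 50%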
Figure 26. Existing Interface Hierarchy (The World After the Graft)
Notice that in the shape on parent policy, we have only class-default with a child policy:
policy-map parent100
  class class-default
    shape average 900m
    service-policy child100
This is a special case where we just define a schedule entry rather than create a schedule for this policy. We refer to this entity as a collapsed class-default.
To grasp the significance of this concept, let's add a policy to another VLAN (VLAN200). (Asterisks mark what has been added relative to the policy-map parent100 listed at the beginning of the topic):
policy-map child200
  class voice
    priority
    police cir 100m
policy-map parent200
  class class-default
    shape average 900m
    bandwidth remaining ratio 2    ****
    service-policy child200
interface GigabitEthernet1/0/0.200
  encaps dot1q 200
  service-policy out parent200
The complete scheduling hierarchy would now look as follows:
Figure 27. A Complete Hierarchical Scheduling Framework to Handle Congestion and Avoid Wasting Bandwidth
Observe that in the second parent policy (the policy for VLAN200) we specified a bandwidth remaining ratio of 2, controlling fairness between VLANs. Recall from the QoS Scheduling chapter that in flat policies peers exist within the parent policy, which lets us use either the bandwidth remaining ratio or the bandwidth remaining percent command to specify the excess weight. In the shape on parent construct, no peers exist within the policy itself; when you configure the policy-map, QoS cannot know what will materialize as peers in the logical interface aggregation schedule. Consequently, the bandwidth remaining percent command (which requires a known set of peers summing to 100 percent) is not supported in this construct; you specify the excess weight with the bandwidth remaining ratio command, as shown.
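As a worked example of how these excess weights behave, suppose both VLAN shapers are oversubscribing a 1 Gbps interface and both logical interfaces have unlimited offered load (ignoring, for simplicity, traffic in the interface default queue and any priority guarantees). With excess weights of 1 (VLAN100) and 2 (VLAN200), the logical interface aggregation schedule divides the link 1:2:

  VLAN100 share = 1 / (1 + 2) x 1 Gbps ≈ 333 Mbps
  VLAN200 share = 2 / (1 + 2) x 1 Gbps ≈ 667 Mbps

Both results fall below the 900 Mbps parent shapers, so in this scenario the excess weights alone determine the outcome; the shapers would only come into play if one VLAN's share approached its Max rate.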
This complete scheduling hierarchy truly highlights the benefits of the Cisco Modular QoS CLI (MQC) and the Hierarchical Queueing Framework (HQF). For any given interface, the hierarchy is deterministic; we know clearly which packet will be forwarded next. As we have schedules to handle all congestion points, no bandwidth is wasted regardless of where congestion may occur.