Date:      Sat, 04 Aug 2012 18:51:05 -0500
From:      CyberLeo Kitsana <cyberleo@cyberleo.net>
To:        freebsd-pf@freebsd.org
Subject:   AltQ nested classes and limits
Message-ID:  <501DB569.4030700@cyberleo.net>

Hi!

I'm currently struggling with a little issue with pf and AltQ cbq in
FreeBSD 8.2-RELEASE.

I'm trying to set up queueing with two different ISP uplinks attached to
my gateway. Note that I am not trying to multihome the machine.

The machine in question only has two interfaces, so those are trunked to
an 8-port managed switch as vlans 1 through 6; the primary link's modem
is plugged into vlan 5, and the secondary into vlan 6. All this is
working fine.

(Only one interface is attached to the trunk in this snapshot; the other
is being used for system access while I get this sorted out.)

----8<----
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=38d8<VLAN_MTU,VLAN_HWTAGGING,POLLING,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC>
	ether 00:01:80:79:fc:5a
	inet6 fe80::201:80ff:fe79:fc5a%lagg0 prefixlen 64 scopeid 0xa
	inet 192.168.2.1 netmask 0xffffff00 broadcast 192.168.2.255
	nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
	media: Ethernet autoselect
	status: active
	laggproto lacp
	laggport: re0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
vlan5: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	ether 00:01:80:79:fc:5a
	inet6 fe80::222:68ff:fe8e:e0fe%vlan5 prefixlen 64 scopeid 0xf
	inet 216.80.73.130 netmask 0xfffffff8 broadcast 216.80.73.135
	inet 216.80.73.131 netmask 0xffffffff broadcast 216.80.73.131
	inet 192.168.100.2 netmask 0xfffffffc broadcast 192.168.100.3
	nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
	media: Ethernet autoselect
	status: active
	vlan: 5 parent interface: lagg0
vlan6: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	ether 00:01:80:79:fc:5a
	inet6 fe80::222:68ff:fe8e:e0fe%vlan6 prefixlen 64 scopeid 0x10
	inet 216.36.125.42 netmask 0xfffffff8 broadcast 216.36.125.47
	inet 216.36.125.43 netmask 0xffffffff broadcast 216.36.125.43
	inet 192.168.1.2 netmask 0xfffffffc broadcast 192.168.1.3
	nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
	media: Ethernet autoselect
	status: active
	vlan: 6 parent interface: lagg0
----8<----
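
For reference, the trunk and vlans are brought up along these lines in
/etc/rc.conf (a sketch reconstructed from the ifconfig output above;
the addresses and the single laggport match this snapshot, everything
else is illustrative):

----8<----
# LACP trunk on re0, with tagged vlans stacked on top of it
ifconfig_re0="up"
cloned_interfaces="lagg0 vlan5 vlan6"
ifconfig_lagg0="laggproto lacp laggport re0 inet 192.168.2.1/24"
ifconfig_vlan5="inet 216.80.73.130/29 vlan 5 vlandev lagg0"
ifconfig_vlan6="inet 216.36.125.42/29 vlan 6 vlandev lagg0"
----8<----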

Since AltQ refuses to function on vlan virtual interfaces, I have
instead attached a hierarchy of classes to the parent interface (lagg0)
and set up rules that classify packets into the queues according to pf
tags and the egress interface. This is also working fine: the packets
land in the appropriate queues.
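
The classification rules follow this pattern (an illustrative sketch,
not my exact ruleset; $int_if, $phones, and the PHONE tag are made-up
names). Traffic is tagged on the way in, and the tag plus the egress
vlan selects a queue from the hierarchy on lagg0:

----8<----
pass in on $int_if from $phones tag PHONE keep state
pass out on vlan5 tagged PHONE queue vlan5_phone keep state
pass out on vlan6 tagged PHONE queue vlan6_phone keep state
----8<----

Queue names are global in pf.conf, so a rule matching on vlan5 can
assign a queue that AltQ services on lagg0.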

----8<----
altq on lagg0 bandwidth 1Gb cbq queue { defq vlan5 vlan6 }
queue defq  bandwidth 64Kb cbq(rio, ecn, default)
queue vlan5 bandwidth 4700Kb cbq(rio, ecn) { vlan5_phone, vlan5_ack, vlan5_ssh, vlan5_dflt, vlan5_bulk, vlan5_down }
  queue vlan5_phone bandwidth 32Kb  priority 7 cbq(rio, ecn, borrow)
  queue vlan5_ack   bandwidth 32Kb  priority 6 cbq(rio, ecn, borrow)
  queue vlan5_ssh   bandwidth 128Kb priority 5 cbq(rio, ecn, borrow)
  queue vlan5_dflt  bandwidth 8Kb   priority 4 cbq(rio, ecn, borrow)
  queue vlan5_bulk  bandwidth 8Kb   priority 2 cbq(rio, ecn, borrow)
  queue vlan5_down  bandwidth 8Kb   priority 0 cbq(rio, ecn, borrow)
queue vlan6 bandwidth 600Kb cbq(rio, ecn) { vlan6_phone, vlan6_ack, vlan6_ssh, vlan6_dflt, vlan6_bulk, vlan6_down }
  queue vlan6_phone bandwidth 32Kb  priority 7 cbq(rio, ecn, borrow)
  queue vlan6_ack   bandwidth 32Kb  priority 6 cbq(rio, ecn, borrow)
  queue vlan6_ssh   bandwidth 128Kb priority 5 cbq(rio, ecn, borrow)
  queue vlan6_dflt  bandwidth 8Kb   priority 4 cbq(rio, ecn, borrow)
  queue vlan6_bulk  bandwidth 8Kb   priority 2 cbq(rio, ecn, borrow)
  queue vlan6_down  bandwidth 8Kb   priority 0 cbq(rio, ecn, borrow)
----8<----

What fails completely is my attempt to limit the bandwidth toward each
of the modems. The second-level child classes appear to ignore their
parent class and borrow directly from the root instead, regardless of
the hierarchy or the bandwidth limits in place. This frequently results
in queue suspends when the queues cannot drain fast enough, at which
point all traffic ceases for a minute or so.

----8<----
queue root_lagg0 on lagg0 bandwidth 1Gb priority 0 cbq( wrr root ) {defq, vlan5, vlan6}
  [ pkts:     169394  bytes:   99190404  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:   136.8 packets/s, 364.26Kb/s ]
...
queue  vlan5 on lagg0 bandwidth 4.70Mb cbq( red ecn rio ) {vlan5_phone, vlan5_ack, vlan5_ssh, vlan5_dflt, vlan5_bulk, vlan5_down}
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:     0.0 packets/s, 0 b/s ]
queue   vlan5_phone on lagg0 bandwidth 32Kb priority 7 cbq( red ecn rio borrow )
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:     0.0 packets/s, 0 b/s ]
queue   vlan5_ack on lagg0 bandwidth 32Kb priority 6 cbq( red ecn rio borrow )
  [ pkts:      45696  bytes:    2750286  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:   3590  suspends:      0 ]
  [ measured:    30.3 packets/s, 14.45Kb/s ]
queue   vlan5_ssh on lagg0 bandwidth 128Kb priority 5 cbq( red ecn rio borrow )
  [ pkts:        399  bytes:      26482  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:     0.5 packets/s, 261.29 b/s ]
queue   vlan5_dflt on lagg0 bandwidth 8Kb priority 4 cbq( red ecn rio borrow )
  [ pkts:     115694  bytes:   91758450  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows: 115687  suspends:     17 ]
  [ measured:    98.6 packets/s, 309Kb/s ]
queue   vlan5_bulk on lagg0 bandwidth 8Kb priority 2 cbq( red ecn rio borrow )
  [ pkts:         55  bytes:       8494  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:     0.0 packets/s, 0 b/s ]
queue   vlan5_down on lagg0 bandwidth 8Kb priority 0 cbq( red ecn rio borrow )
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50  borrows:      0  suspends:      0 ]
  [ measured:     0.0 packets/s, 0 b/s ]
----8<----

Does anyone here have experience with such a setup? Do I have incorrect
expectations, or a flawed implementation? Is this a known issue with the
AltQ implementation in FreeBSD 8.2?

I can provide further information upon request.

Thank you.

-- 
Fuzzy love,
-CyberLeo
Technical Administrator
CyberLeo.Net Webhosting
http://www.CyberLeo.Net
<CyberLeo@CyberLeo.Net>

Furry Peace! - http://www.fur.com/peace/


