Date:      Mon, 24 Jan 2005 08:55:40 -0600
From:      Nick Buraglio <nick@buraglio.com>
To:        "Shane James" <shane@phpboy.co.za>
Cc:        freebsd-pf@freebsd.org
Subject:   Re: PF/ALTQ Issues
Message-ID:  <03DF541E-6E18-11D9-AF54-000D93B6DEE8@buraglio.com>
In-Reply-To: <003801c501f4$faf3b210$310a0a0a@phpboy>
References:  <003801c501f4$faf3b210$310a0a0a@phpboy>

I've found that there is a severe lack of documentation for hfsc.
Anyway, when testing my QoS rules the only reliable method I've found
is iperf. It gives you a realtime report on what you're actually
trying to do, measures traffic, and supports multiple streams,
multicast, etc. MRTG is recording an average if I'm not mistaken,
which can sometimes be deceiving. I'm not questioning that there may
be some strangeness in the hfsc queueing discipline code (I've been
meaning to start using it since it seems very powerful and does
exactly what I need), but in my humble opinion there isn't a much
better way to measure true throughput. iperf is available in the
ports collection. I know that doesn't really answer your question,
but maybe it'll help you gather some better troubleshooting data.
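
Something like the following is usually enough to get a clean number;
the address here is only a placeholder and the flags are just what I
tend to use:

iperf -s                            # on the receiving host
iperf -c 192.0.2.10 -t 30 -P 4      # on the sender: 30 seconds, 4 parallel streams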

nb

On Jan 24, 2005, at 3:13 AM, Shane James wrote:

> I'm running FreeBSD 5.3-STABLE. The only change I've made to the 
> GENERIC kernel is adding the following options:
>
> device          pf
> device          pflog
> device          pfsync
>
> options         ALTQ
> options         ALTQ_CBQ        # Class Based Queueing
> options         ALTQ_RED        # Random Early Drop
> options         ALTQ_RIO        # RED In/Out
> options         ALTQ_HFSC       # Hierarchical Packet Scheduler
> options         ALTQ_CDNR       # Traffic conditioner
> options         ALTQ_PRIQ       # Priority Queueing
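>
> (For reference, after adding these the kernel gets rebuilt the usual
> way; the KERNCONF name below is only a placeholder:)
>
> cd /usr/src
> make buildkernel KERNCONF=ALTQ
> make installkernel KERNCONF=ALTQ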
>
> This box is a P4 2.4GHz with 512MB of RAM.
>
> Here is the output of 'netstat -m':
> 270 mbufs in use
> 267/32768 mbuf clusters in use (current/max)
> 0/3/4496 sfbufs in use (current/peak/max)
> 601 KBytes allocated to network
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
> 0 calls to protocol drain routines
>
> Just to show that I'm not maxing out on my mbufs.
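>
> (If the clusters ever did run out, my understanding is that the limit
> can be raised at boot via /boot/loader.conf; the value below is only
> an example:)
>
> kern.ipc.nmbclusters="65536"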
>
> Please excuse how untidy the ALTQ limits/rules are; I've been 
> playing around quite a bit with this to try and solve the issue.
>
> #tables
> table <zaips> persist file "/etc/zaips"   # all South African routes (my home country)
> table <sodium> { 196.23.168.136, 196.14.164.130, 196.46.187.69 }
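>
> To double-check what actually ends up in the tables, pfctl can dump
> them, e.g.:
>
> pfctl -t zaips -T show
> pfctl -t sodium -T show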
>
> #############################
> # AltQ on Uplink Interface
> #############################
> altq on $uplink_if hfsc bandwidth 100Mb queue { dflt_u, lan_u, local_u, intl_u, monitor_u }
>         queue dflt_u bandwidth 64Kb hfsc(default realtime 512Kb upperlimit 512Kb)
>         queue lan_u bandwidth 10Mb hfsc(realtime 10Mb upperlimit 10Mb)
>         queue monitor_u bandwidth 64Kb hfsc(realtime 256Kb upperlimit 256Kb)
>
> queue local_u bandwidth 10Mb hfsc(upperlimit 10Mb) { windows_u_l, blueworld-l_u, mail_u_l, unix_u_l }
>         queue windows_u_l bandwidth 64Kb hfsc(realtime 192Kb upperlimit 320Kb)
>         queue blueworld-l_u bandwidth 64Kb hfsc(realtime 64Kb upperlimit 192Kb)
>         queue mail_u_l bandwidth 64Kb hfsc(realtime 256Kb upperlimit 320Kb)
>         queue unix_u_l bandwidth 256Kb hfsc(realtime 256Kb upperlimit 256Kb)
>
> queue intl_u bandwidth 10Mb hfsc(upperlimit 10Mb) { windows_u_i, blueworld_u_i, mail_u_i, unix_u_i }
>         queue windows_u_i bandwidth 64Kb hfsc(upperlimit 64Kb)
>         queue blueworld_u_i bandwidth 64Kb hfsc(upperlimit 64Kb)
>         queue mail_u_i bandwidth 64Kb hfsc(realtime 64Kb upperlimit 64Kb)
>         queue unix_u_i bandwidth 64Kb hfsc(upperlimit 64Kb)
>
> #############################
> # AltQ on Hosting Interface
> #############################
> altq on $hosting_if hfsc bandwidth 100Mb queue { dflt_d, lan_d, local_d, intl_d, sodium_d }
>         queue dflt_d bandwidth 64Kb hfsc(default realtime 512Kb upperlimit 512Kb)
>         queue lan_d bandwidth 10Mb hfsc(realtime 10Mb upperlimit 10Mb)
>
> queue local_d bandwidth 10Mb hfsc(upperlimit 10Mb) { windows_ld, monitor_d, blueworld_ld, mail_d_l, unix_d_l }
>         queue windows_ld bandwidth 64Kb hfsc(realtime 192Kb upperlimit 256Kb)
>         queue monitor_d bandwidth 64Kb hfsc(realtime 256Kb upperlimit 256Kb)
>         queue blueworld_ld bandwidth 64Kb hfsc(realtime 64Kb upperlimit 128Kb)
>         queue mail_d_l bandwidth 64Kb hfsc(realtime 256Kb upperlimit 320Kb)
>         queue unix_d_l bandwidth 256Kb hfsc(realtime 256Kb upperlimit 256Kb)
>
> queue intl_d bandwidth 10Mb hfsc(upperlimit 10Mb) { windows_d_i, monitor_d_i, blueworld_d_i, mail_d_i, unix_d_i }
>         queue windows_d_i bandwidth 64Kb hfsc(realtime 64Kb upperlimit 64Kb)
>         queue monitor_d_i bandwidth 64Kb hfsc(upperlimit 64Kb)
>         queue blueworld_d_i bandwidth 64Kb hfsc(realtime 32Kb upperlimit 64Kb)
>         queue mail_d_i bandwidth 64Kb hfsc(upperlimit 64Kb)
>         queue unix_d_i bandwidth 64Kb hfsc(upperlimit 64Kb)
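>
> (To rule out a ruleset typo, the whole config can be syntax-checked
> before it is loaded; this assumes the default /etc/pf.conf path:)
>
> pfctl -nf /etc/pf.conf      # parse only, don't load anything
> pfctl -f /etc/pf.conf       # load the rules and queues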
>
>
> Here is an example of how I'm assigning the traffic to one of the 
> queues:
>
> #International Queues
> pass out on $uplink_if from <sodium> to any keep state queue mail_u_i
> pass out on $hosting_if from any to <sodium> keep state queue mail_d_i
>
> #Local Queues
> pass out on $uplink_if from <sodium> to <zaips> keep state queue mail_u_l
> pass out on $hosting_if from <zaips> to <sodium> keep state queue mail_d_l
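>
> To see whether those rules are actually being matched, each rule's
> evaluation/packet/byte counters can be listed with:
>
> pfctl -vsr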
>
> Also, I am running Intel PRO/100 S (Intel Ethernet Express) server 
> cards on both interfaces. Both cards have been swapped to confirm 
> that it's not a hardware-related issue, which it's not.
>
> 'pfctl -vsq' output for these 4 queues:
>
>
> queue   mail_u_l bandwidth 256Kb hfsc( realtime 256Kb upperlimit 256Kb )
>   [ pkts:       3592  bytes:    3624366  dropped pkts:      0 bytes:      0 ]
> --
>
> queue   mail_u_i bandwidth 64Kb hfsc( realtime 64Kb upperlimit 64Kb )
>   [ pkts:       1277  bytes:     230620  dropped pkts:      0 bytes:      0 ]
> --
>
> queue   mail_d_l bandwidth 256Kb hfsc( realtime 256Kb upperlimit 256Kb )
>   [ pkts:       3933  bytes:     856087  dropped pkts:      0 bytes:      0 ]
> --
> queue   mail_d_i bandwidth 64Kb hfsc( upperlimit 64Kb )
>   [ pkts:       1185  bytes:    1559939  dropped pkts:      0 bytes:      0 ]
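>
> (The same statistics can also be watched live; a second -v makes
> pfctl keep printing the queue counters every few seconds:)
>
> pfctl -vvsq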
>
>
> Now, here is the issue.
> With all the queues that I add, upstream ($uplink_if) bandwidth goes 
> quite a bit slower than it's supposed to. Downstream ($hosting_if) 
> runs at the correct speeds and sometimes even faster than what I've 
> assigned to it. Another strange thing is that I don't think it's 
> always assigning all traffic to the correct queues: sometimes a queue 
> uses more bandwidth than I've assigned to it and sometimes a lot 
> less, despite having all of that bandwidth at its disposal at test 
> time.
>
> The way I've been measuring the usage is with MRTG and lftp transfers 
> to hosts on my peering network.
>
> Any help would be much appreciated.
>
> Kind Regards,
> Shane James
> VirTek - http://www.virtek.co.za
> O: 0861 10 1107
> M: +27 (0) 82 786 3878
> F: +27 (0) 11 388 5626
> _______________________________________________
> freebsd-pf@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-pf
> To unsubscribe, send any mail to "freebsd-pf-unsubscribe@freebsd.org"


