From: Ian FREISLICH <if@hetzner.co.za>
To: freebsd-current@freebsd.org
Date: Fri, 25 Aug 2006 09:37:46 +0200
Subject: 802.1Q vlan performance.
List-Id: Discussions about the use of FreeBSD-current

Hi

While doing some experimentation and work on ipfw to see where I could
improve performance for our virtualised firewall, I came across the
following comment in sys/net/if_vlan.c:

 * The VLAN_ARRAY substitutes the dynamic hash with a static array
 * with 4096 entries. In theory this can give a boots(sic) in processing,
 * however on practice it does not. Probably this is because array
 * is too big to fit into CPU cache.

Being curious, and having determined the main throughput bottleneck to
be the vlan driver, I thought I'd test that assertion. I have 506 vlans
on this machine.
With VLAN_ARRAY unset, ipfw disabled, fastforwarding enabled, and
vlanhwtag enabled on the interface, the fastest forwarding rate I could
get was 278kpps (a steady decrease from 440kpps with 24 vlans, linearly
proportional to the number of vlans). With exactly the same
configuration, but the vlan driver compiled with VLAN_ARRAY defined,
the forwarding rate of the system is back at 440kpps.

The testbed looks like this:

 +---------+             +--------+                     +-----------+
 | pkt gen |             | router |                     |  pkt rec  |
 |  host   |vlan2   vlan2|        |vlan1002     vlan1002|   host    |
 | netperf |------------>|        |-------------------->| netserver |
 |         |em0       em0|        |em1               em0|           |
 +---------+             +--------+                     +-----------+

The router has vlan2 to vlan264 and vlan1002 through vlan1264 in 22
blocks of 23 vlan groups (a consequence of using 24-port switches to
tag/untag for customers). The pkt gen and receive hosts both have 253
vlans.

Can anyone suggest a good reason not to turn this option on by default?
It looks to me like it dramatically improves performance.

Ian

--
Ian Freislich
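For anyone wanting to reproduce this: VLAN_ARRAY is a compile-time option, so it takes a kernel rebuild. Something like the following should do it (kernel config name and paths are illustrative, not from the test setup above):

```
# In the kernel configuration file, e.g. sys/i386/conf/MYKERNEL:
options         VLAN_ARRAY

# Then rebuild and install:
#   cd /usr/src
#   make buildkernel KERNCONF=MYKERNEL
#   make installkernel KERNCONF=MYKERNEL
```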