From: Archy Cho <archycho@gmail.com>
To: freebsd-current@freebsd.org, rizzo@iet.unipi.it
Date: Sat, 5 Nov 2011 03:50:21 +0800
Subject: Fwd: Netmap for routers (em0 device)
List-Id: Discussions about the use of FreeBSD-current

---------- Forwarded message ----------
From: Archy Cho
Date: 2011/11/5
Subject: Re: Netmap for routers (em0 device)
To: Darren Pilgrim

2011/11/5 Darren Pilgrim:

> On 2011-11-04 09:06, Archy Cho wrote:
>
>> Hello,
>>
>> I
am very happy to see that FreeBSD can achieve such network performance
>> with netmap, since I am currently using FreeBSD as a core router
>> instead of Cisco.
>
> I'm looking at doing the same at a site, but we're not sure about the
> performance as it will sit on 10 GE. Would you tell me about:
>
> - the hardware you use
> - previous hardware that didn't work
> - functions performed (firewall, VPN, BGP feed, etc.)
> - the wire speeds involved
> - the bit and packet rates achieved
> - the CPU load
>
> Thanks in advance

I am currently using FreeBSD 7.4 amd64. Occasionally, under a large-pps
DDoS, the router box (FreeBSD) will drop packets, with one CPU core
pinned at 100% in swi1:net, as seen with `top -HSP`.

The following configuration can only forward around 600 Kpps:

Intel 3420GVP mainboard
Intel Xeon X3480 CPU with HT disabled
8 GB DDR3-1333 RAM
450 GB 10000 rpm WD HLHX disk
Intel 82578DM on board as em0
Intel 82574L on board as em1
>100 ipfw rules
1 Gbps uplink with upstream
Quagga as BGP router with 2 upstream providers, full routes

kern.clockrate: { hz = 1000, tick = 1000, profhz = 2000, stathz = 133 }
kern.dcons.poll_hz: 100
kern.hz: 1000
debug.psm.hz: 20
kern.polling.idlepoll_sleeping: 1
kern.polling.stalled: 114
kern.polling.suspect: 5775
kern.polling.phase: 0
kern.polling.enable: 0
kern.polling.handlers: 2
kern.polling.residual_burst: 0
kern.polling.pending_polls: 1
kern.polling.lost_polls: 871230
kern.polling.short_ticks: 1636
kern.polling.reg_frac: 20
kern.polling.user_frac: 10
kern.polling.idle_poll: 0
kern.polling.each_burst: 1000
kern.polling.burst_max: 1000
kern.polling.burst: 1000

dev.em.0.rx_processing_limit=10000000
dev.em.1.rx_processing_limit=10000000
net.inet.ip.forwarding=1
net.inet.ip.fastforwarding=1
net.inet.icmp.icmplim=10485760
net.inet.ip.dummynet.pipe_byte_limit=104857600
net.inet.ip.dummynet.pipe_slot_limit=104857600
net.inet.ip.dummynet.hash_size=65535
net.inet.ip.fw.tables_max=65535
net.inet.ip.fw.dyn_keepalive=1
net.inet.ip.fw.dyn_max=65535
net.inet.ip.fw.dyn_short_lifetime=5
net.inet.ip.fw.dyn_buckets=4096
net.local.stream.recvspace=524288
net.local.stream.sendspace=524288
net.local.dgram.recvspace=614400
net.inet.tcp.sendspace=614400
net.inet.tcp.recvspace=614400
net.inet.udp.recvspace=420800
net.inet.sctp.recvspace=2330160
net.inet.sctp.sendspace=2330160
net.inet.raw.recvspace=614400
net.raw.recvspace=524288
net.raw.sendspace=524288
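For context on the ~600 Kpps figure reported above: a 1 GbE link carries at most about 1.488 Mpps of minimum-size (64-byte) frames once preamble and inter-frame gap are counted. A quick back-of-the-envelope (not part of the original mail, just the standard Ethernet framing arithmetic):

```python
# Theoretical packet budget of a 1 GbE link at minimum frame size.
FRAME = 64          # minimum Ethernet frame, bytes (incl. FCS)
OVERHEAD = 20       # preamble/SFD (8) + inter-frame gap (12), bytes
LINK_BPS = 1_000_000_000

line_rate_pps = LINK_BPS // ((FRAME + OVERHEAD) * 8)
print(line_rate_pps)                       # 1488095 pps, i.e. ~1.488 Mpps

achieved = 600_000                         # figure reported in the mail
print(f"{achieved / line_rate_pps:.0%}")   # 40% of minimum-frame line rate
```

So the box above tops out at roughly 40% of worst-case line rate on its 1 Gbps uplink, which is consistent with a small-packet DDoS saturating one core in swi1:net before the wire is full.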