From: Zaphod Beeblebrox <zbeeble@gmail.com>
To: Jack Vogel
Cc: Ben Hutchings, FreeBSD Net, Luigi Rizzo, re, FreeBSD stable
Date: Thu, 23 Feb 2012 01:27:29 -0500
Subject: Re: nmbclusters: how do we want to fix this for 8.3 ?
List-Id: Networking and TCP/IP with FreeBSD

It could do some good to think about the scale of the problem; maybe the driver can tune itself to the hardware.
First, are 8k packet buffers a reasonable default on a GigE port? Well... a GigE port can see anywhere from roughly 100k pps (packets per second) with 1500-byte packets, to around 500k pps at a few hundred bytes, to truly pathological rates (the Ethernet minimum of 64 bytes tops out just under 1.5M pps). 8k buffers vanish in about 1/10th of a second in the 1500-byte case, and that doesn't even account for how quickly the buffers get emptied by other software.

Do you maybe want a switch whereby the GigE port is in performance or non-performance mode? Do you want to assume that systems with GigE ports are not pathologically short of memory? Perhaps in 10 or 100 megabit mode the driver should allocate smaller rings? For that matter, if mbufs come a page's worth at a time, what's the drawback of scaling them up and down with network vs. memory vs. cache pressure?