From owner-svn-src-all@FreeBSD.ORG Fri Jun 11 17:31:20 2010
From: Pyun YongHyeon
Date: Fri, 11 Jun 2010 10:29:57 -0700
To: Scott Long
Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org, Marcel Moolenaar,
    src-committers@freebsd.org, John Baldwin
Subject: Re: svn commit: r209026 - in head/sys/ia64: ia64 include
Message-ID: <20100611172957.GB13776@michelle.cdnetworks.com>
References: <201006110300.o5B30X9q045387@svn.freebsd.org>
    <201006110751.40735.jhb@freebsd.org>
    <853068F6-D736-4DA3-859F-D946D096843D@samsco.org>
    <19B0DF11-5998-40F5-8095-8D2521B1C597@mac.com>
Reply-To: pyunyh@gmail.com

On Fri, Jun 11, 2010 at 11:21:24AM -0600, Scott Long wrote:
> On Jun 11, 2010, at 11:04 AM, Marcel Moolenaar wrote:
> >
> > On Jun 11, 2010, at 9:12 AM, Scott Long wrote:
> >
> >> On Jun 11, 2010, at 5:51 AM, John Baldwin wrote:
> >>> On Thursday 10 June 2010 11:00:33 pm Marcel Moolenaar wrote:
> >>>> Author: marcel
> >>>> Date: Fri Jun 11 03:00:32 2010
> >>>> New Revision: 209026
> >>>> URL: http://svn.freebsd.org/changeset/base/209026
> >>>>
> >>>> Log:
> >>>>   Bump MAX_BPAGES from 256 to 1024. It seems that a few drivers, bge(4)
> >>>>   in particular, do not handle deferred DMA map load operations at all.
> >>>>   Any error, and especially EINPROGRESS, is treated as a hard error and
> >>>>   typically abort the current operation.
> >>>>   The fact that the busdma code queues the load operation for when
> >>>>   resources (i.e. bounce buffers in this particular case) are available
> >>>>   makes this especially problematic. Bounce buffering, unlike what the
> >>>>   PR synopsis would suggest, works fine.
> >>>>
> >>>>   While on the subject, properly implement swi_vm().
> >>>
> >>> NIC drivers do not handle deferred load operations at all (note that
> >>> bus_dmamap_load_mbuf() and bus_dmamap_load_mbuf_sg() enforce
> >>> BUS_DMA_NOWAIT). It is common practice to just drop the packet in that
> >>> case.
> >>>
> >>
> >> Yes, long ago when network drivers started being converted to busdma, it
> >> was agreed that EINPROGRESS simply doesn't make sense for them. Any
> >> platform that winds up making extensive use of bounce buffers for network
> >> hardware is going to perform poorly no matter what, and should hopefully
> >> have some sort of IOMMU that can be used instead.
> >
> > Unfortunately, things aren't as simple as presented.
> >
> > For one, bge(4) wedges as soon as the platform runs out of bounce
> > buffers when they're needed. The box needs to be reset in order to
> > get the interface back. I pick any implementation that remains
> > functional over a mis-optimized one that breaks. Deferred load
> > operations are better for performance than outright failure.
> >
>
> This sounds like a bug in the bge driver. I don't see it through casual
> inspection, but the driver should be able to either drop the mbuf
> entirely, or requeue it on the ifq and then restart the ifq later.
>

On the TX path, bge(4) requeues the TX frame when
bus_dmamap_load_mbuf_sg(9) fails (see the sketch at the end of this
message). On the RX path, bge(4) drops the received frame and reuses the
previously loaded RX buffer. If bus_dmamap_load_mbuf_sg(9) always
returned EINPROGRESS, though, bge(4) could not send or receive frames at
all.

> > Also: the kernel does nothing to guarantee maximum availability
> > of DMA-able memory under load, so bounce buffers (or use of I/O
> > MMUs for that matter) are a reality. Here too the performance
> > argument doesn't necessarily hold, because the kernel may be
> > busy with more than just sending and receiving packets, and the
> > need to defer load operations is very appropriate. If the
> > alternative is just dropped packets, I'm fine with that too,
> > but I for one cannot say that *not* filling a H/W ring with
> > buffers is not going to wedge the hardware in some cases.
> >
> > Plus: SGI Altix does not have any DMA-able memory for 32-bit
> > hardware. The need for an I/O MMU is absolute, and since there
> > are typically fewer mapping registers than packets on a ring,
> > the need for deferred operation seems quite acceptable if the
> > alternative is, again, failure to operate.
> >
>
> I'm not against you upping the bounce buffer limit for a particular
> platform, but it's still unclear to me if (given bug-free drivers) it's
> worth the effort to defer a load rather than just drop the packet and
> let the stack retry it. One question that would be good to answer is
> whether the failed load is happening in the RX or the TX path.
>
> Scott
>
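
To make the TX-path handling described above concrete, here is a minimal
sketch of the usual pattern, assuming made-up xx_* names, an XX_NSEG
segment limit and a simplified softc; it is not the actual bge(4) code.
The mbuf chain is loaded with BUS_DMA_NOWAIT, retried once through
m_defrag(9) on EFBIG, and any other error is handed back so the caller
can prepend the frame to if_snd and retry later.

/*
 * Hypothetical sketch only: the xx_* names and XX_NSEG are invented for
 * illustration; the real bge(4) code differs in detail.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <machine/bus.h>
#include <net/if.h>
#include <net/if_var.h>

#define XX_NSEG 32                      /* assumed per-frame segment limit */

struct xx_softc {
        bus_dma_tag_t   xx_tx_mtag;     /* DMA tag for TX mbufs */
        bus_dmamap_t    xx_tx_map;      /* DMA map for this TX slot */
};

static int
xx_encap(struct xx_softc *sc, struct mbuf **m_head)
{
        bus_dma_segment_t segs[XX_NSEG];
        struct mbuf *m;
        int error, nsegs;

        m = *m_head;
        error = bus_dmamap_load_mbuf_sg(sc->xx_tx_mtag, sc->xx_tx_map, m,
            segs, &nsegs, BUS_DMA_NOWAIT);
        if (error == EFBIG) {
                /* Too many segments: defragment the chain and retry once. */
                m = m_defrag(m, M_DONTWAIT);
                if (m == NULL) {
                        m_freem(*m_head);
                        *m_head = NULL;
                        return (ENOBUFS);
                }
                *m_head = m;
                error = bus_dmamap_load_mbuf_sg(sc->xx_tx_mtag,
                    sc->xx_tx_map, m, segs, &nsegs, BUS_DMA_NOWAIT);
        }
        if (error != 0) {
                /*
                 * ENOMEM and friends: leave the mbuf chain intact so the
                 * caller can IFQ_DRV_PREPEND() it back onto if_snd and
                 * retry from the start routine later.
                 */
                return (error);
        }

        /* ... program the hardware TX descriptors from segs[0..nsegs-1] ... */

        return (0);
}

The RX side is the mirror image: when the replacement mbuf cannot be
loaded, the driver keeps the previously loaded buffer in the ring and
simply drops the incoming frame, so the ring never ends up with an
unmapped slot. Since bus_dmamap_load_mbuf_sg(9) enforces BUS_DMA_NOWAIT,
EINPROGRESS never reaches either path.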