From owner-freebsd-amd64@FreeBSD.ORG Wed Oct 26 11:33:34 2005
Return-Path:
X-Original-To: freebsd-amd64@freebsd.org
Delivered-To: freebsd-amd64@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by
	hub.freebsd.org (Postfix) with ESMTP id BCEC216A426; Wed, 26 Oct 2005
	11:33:34 +0000 (GMT) (envelope-from jc@oxado.com)
Received: from mars.interactivemediafactory.net (mars.imfeurope.net
	[194.2.222.161]) by mx1.FreeBSD.org (Postfix) with ESMTP id 7C37F43D49;
	Wed, 26 Oct 2005 11:33:32 +0000 (GMT) (envelope-from jc@oxado.com)
Received: from JC-8600.oxado.com (localhost [127.0.0.1]) by
	mars.interactivemediafactory.net (8.12.11/8.12.11) with ESMTP id
	j9QBXMrg007119; Wed, 26 Oct 2005 13:33:26 +0200 (CEST)
	(envelope-from jc@oxado.com)
Message-Id: <6.2.3.4.0.20051026131012.03a80a20@pop.interactivemediafactory.net>
X-Mailer: QUALCOMM Windows Eudora Version 6.2.3.4
Date: Wed, 26 Oct 2005 13:33:19 +0200
To: freebsd-amd64@freebsd.org
From: Jacques Caron
In-Reply-To: <6.2.3.4.0.20051025171333.03a15490@pop.interactivemediafactory.net>
References: <6.2.3.4.0.20051025171333.03a15490@pop.interactivemediafactory.net>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; format=flowed
Cc: scottl@freebsd.org, sos@freebsd.org
Subject: Re: busdma dflt_lock on amd64 > 4 GB
X-BeenThere: freebsd-amd64@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Porting FreeBSD to the AMD64 platform
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 26 Oct 2005 11:33:34 -0000

Hi all,

Continuing on this story... [I took the liberty of CC'ing Scott and
Soren]. The PR is amd64/87977, though in the end it isn't amd64-specific
but >4GB-specific.
There is really a big problem somewhere between ata and bus_dma on boxes
with more than 4 GB of RAM and more than 2 ATA disks:

* bounce buffers will be needed

* ata will have bus_dma allocate bounce buffers:

hw.busdma.zone1.total_bpages: 32
hw.busdma.zone1.free_bpages: 32
hw.busdma.zone1.reserved_bpages: 0
hw.busdma.zone1.active_bpages: 0
hw.busdma.zone1.total_bounced: 27718
hw.busdma.zone1.total_deferred: 0
hw.busdma.zone1.lowaddr: 0xffffffff
hw.busdma.zone1.alignment: 2
hw.busdma.zone1.boundary: 65536

* if I do a dd with bs=256000, 16 bounce pages will be used (most of the
time). As long as I stay on the same disk, no more pages are used.

* as soon as I access another disk (e.g. with another dd with the same
bs=256000), another set of 16 pages is used (bus_dma tags and maps are
allocated on a per-channel basis), and all 32 bounce pages are in use
(most of the time)

* and if I try to access a third disk, more bounce pages are needed, and:
  - one of the ata_dmaalloc calls to bus_dma_tag_create has ALLOCNOW set
  - busdma_machdep will not allocate more bounce pages in that case (the
    limit is imposed by maxsize in that situation, and it has already
    been reached)
  - so ata_dmaalloc fails
  - but some other bus_dma_tag_create call without ALLOCNOW set will
    still cause bounce pages to be allocated, but deferred, so the
    non-existent lockfunc is called, and we panic.

Adding the standard lockfunc will (probably) fix the panic, but DMA in
ata will still be broken. The same problems most probably exist in many
other drivers.

I think we thus have two issues:

- providing a lockfunc in nearly all bus_dma_tag_create calls (or having
  a better default than a panic)

- allocating more bounce pages when needed in the ALLOCNOW case (with
  logic similar to that used to allocate bounce pages in the non-ALLOCNOW
  case)

Thoughts?

Jacques.