From: Jason Harmening <jason.harmening@gmail.com>
To: freebsd-hackers@freebsd.org
Date: Wed, 21 Oct 2009 11:21:15 -0500
Subject: multi-seg bus_dmamem_alloc?

Hi everyone,

There are starting to be some drivers that need to allocate large chunks
of DMA-able memory, and since bus_dmamem_alloc() on most architectures
always returns a physically contiguous buffer, it may not work for them.
It seems like we could use the new sglist(9) routines to help us here:

--Define 2 new functions:

    int bus_dmamem_alloc_sglist(bus_dma_tag_t dmat, size_t size,
        struct sglist *sg, int flags, bus_dmamap_t *mapp);
    void bus_dmamem_free_sglist(bus_dma_tag_t dmat, struct sglist *sg,
        bus_dmamap_t map);

--For sparc64 (or anywhere else we want to use an IOMMU): malloc() the
buffer, feed it to sglist_build(), and program the IOMMU to meet the
constraints in dmat.  Isn't this what we already do for sparc64, minus
the sglist part?

--For direct-mapped architectures: if the constraints in dmat are lenient
enough, just malloc() and sglist_build().  Otherwise, call
contigmalloc(M_NOWAIT) in a loop, each pass trying to allocate as much of
the remaining buffer as possible.  Whenever an allocation fails, divide
the allocation size by (roughly) 2 and retry, and keep allocating until
we've allocated enough space, the allocation size drops below PAGE_SIZE,
or we exceed dmat->maxsegs.  (Rough sketch at the end of this mail.)

--Some other things we'd need:
    --bus_dmamap_load_sglist()--I think jhb already did this as part of
      the sglist work, at least for amd64.
    --Structures in the busdma map to track the allocated buffers so we
      can free them later.

--Are there lower-level calls we could make to just allocate the physical
pages instead of malloc()/contigmalloc()?
The kva mapping for each allocated buffer segment isn't necessary--a lot
of drivers would probably just want to mmap the sglist to userspace
anyway.

--Could we instead integrate this multi-seg functionality into the
default bus_dmamem_alloc()?  We'd at least have to be able to map the
physical segments into a contiguous kva area, though we wouldn't
necessarily have to use an sglist in that case.  (Also sketched at the
end of this mail.)

Let me know if this idea has any potential--if it does, I'd love to try
implementing it :)

--Jason
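
P.S.: to make the direct-mapped fallback a little more concrete, here's
roughly the loop I'm picturing for bus_dmamem_alloc_sglist().  Treat it
as a sketch, not a patch--it assumes it sits in the MD busdma_machdep.c
where the tag layout is visible (lowaddr/alignment/boundary are the x86
field names; other archs call them something else), and all the per-map
bookkeeping that bus_dmamem_free_sglist() would need is hand-waved:

    /*
     * Rough sketch of the contigmalloc() halving loop described above.
     */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/sglist.h>
    #include <machine/bus.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>            /* vtophys() */

    int
    bus_dmamem_alloc_sglist(bus_dma_tag_t dmat, size_t size,
        struct sglist *sg, int flags, bus_dmamap_t *mapp)
    {
            size_t chunk, left;
            void *va;

            left = size;
            chunk = round_page(size);
            while (left > 0) {
                    /* Another segment would exceed dmat->maxsegs. */
                    if (sg->sg_nseg == sg->sg_maxseg)
                            goto fail;
                    if (chunk > left)
                            chunk = round_page(left);
                    /* Grab as much of the remainder as we can in one shot. */
                    va = contigmalloc(chunk, M_DEVBUF, M_NOWAIT, 0,
                        dmat->lowaddr, dmat->alignment, dmat->boundary);
                    if (va == NULL) {
                            /* Halve and retry; below a page we give up. */
                            chunk = trunc_page(chunk / 2);
                            if (chunk < PAGE_SIZE)
                                    goto fail;
                            continue;
                    }
                    /*
                     * Record the physical segment.  The kva (va) would also
                     * be stashed in the map so bus_dmamem_free_sglist() can
                     * contigfree() it later.
                     */
                    if (sglist_append_phys(sg, vtophys(va), chunk) != 0) {
                            contigfree(va, chunk, M_DEVBUF);
                            goto fail;
                    }
                    left -= MIN(chunk, left);
            }
            /* *mapp setup omitted. */
            return (0);
    fail:
            /* Walk whatever bookkeeping we kept and contigfree() it all. */
            return (ENOMEM);
    }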
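
And for the "fold it into the default bus_dmamem_alloc()" idea, the
contiguous kva mapping over the scattered physical segments is the part
that would have to be added.  Something like the sketch below, assuming
the kmem_alloc_nofault()/pmap_qenter() interfaces and the
sg_segs/ss_paddr/ss_len layout of struct sglist--whether those are the
right VM calls to lean on is exactly the kind of feedback I'm after:

    /*
     * Sketch: map the physical segments of an sglist into one contiguous
     * kva range, so the result could be handed back from a multi-seg
     * bus_dmamem_alloc() like a normal buffer.
     */
    #include <sys/param.h>
    #include <sys/sglist.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>       /* kmem_alloc_nofault() */
    #include <vm/vm_kern.h>         /* kernel_map */
    #include <vm/vm_page.h>         /* PHYS_TO_VM_PAGE() */
    #include <vm/pmap.h>

    static void *
    sglist_map_contig_kva(struct sglist *sg, size_t size)
    {
            struct sglist_seg *seg;
            vm_offset_t kva, off;
            vm_paddr_t pa;
            vm_page_t m;
            int i;

            kva = kmem_alloc_nofault(kernel_map, round_page(size));
            if (kva == 0)
                    return (NULL);
            off = 0;
            for (i = 0; i < sg->sg_nseg; i++) {
                    seg = &sg->sg_segs[i];
                    /* Enter each page of this segment at the next kva slot. */
                    for (pa = seg->ss_paddr;
                        pa < seg->ss_paddr + seg->ss_len;
                        pa += PAGE_SIZE, off += PAGE_SIZE) {
                            m = PHYS_TO_VM_PAGE(pa);
                            pmap_qenter(kva + off, &m, 1);
                    }
            }
            /* Teardown would be pmap_qremove() + kmem_free(). */
            return ((void *)kva);
    }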