From owner-freebsd-hackers@FreeBSD.ORG Sun Oct 2 14:38:11 2011
Date: Sun, 2 Oct 2011 18:37:58 +0400
From: Lev Serebryakov <lev@FreeBSD.org>
Organization: FreeBSD
Message-ID: <16010671866.20111002183758@serebryakov.spb.ru>
To: Davide Italiano
Cc: freebsd-hackers@freebsd.org
Subject: Re: Memory allocation in kernel -- what to use in which situation? What is the best for page-sized allocations?

Hello, Davide.
You wrote on 2 October 2011 at 18:00:26:

>> BTW, I/O often requires big buffers, up to MAXPHYS (128 KiB for
>> now). Do you mean that any allocation of such memory carries
>> considerable performance penalties, especially on multi-core and
>> multi-CPU systems?
> In fact, the main client of that kind of allocation is the ZFS
> filesystem (due to its mechanism of adaptive cache replacement,
> ARC). AFAIK, at the time UMA was written, the kind of allocations
> you describe were so infrequent that no initial effort was made to
> optimize them.
> People tried to address this issue by having ZFS create a large
> number of UMA zones for large allocations of different sizes.
> Unfortunately, one of the side effects of this approach was
> increased fragmentation, so we're still investigating it.

What about those geom modules which allocate buffers because they
need to read more than the upper layer requested? geom_cache and
geom_raid3, for example? And "my" geom_raid5 -- I'm beginning to
understand why the original author of geom_raid5 (which needs
MAXPHYS-sized buffers regularly) wrote his own memory management
layer...

-- 
// Black Lion AKA Lev Serebryakov