From: Scott Long <scottl@samsco.org>
Date: Mon, 31 Mar 2008 15:28:18 -0600
To: Ivan Voras
Cc: freebsd-current@freebsd.org
Subject: Re: Are large RAID stripe sizes useful with FreeBSD?
Message-ID: <47F15772.5010104@samsco.org>
In-Reply-To: <9bbcef730803311409ha25effam9dd522c9084783ad@mail.gmail.com>

Ivan Voras wrote:
> On 31/03/2008, Scott Long wrote:
>> Ivan Voras wrote:
>> > Most new hardware RAID controllers offer stripe sizes of 128K and 256K,
>> > and some also have 512K and 1M stripes. In the simplest case of a RAID0
>> > of two drives, knowing that the data is striped across the drives and
>> > that FreeBSD issues I/O requests of at most 64K, is it useful to set
>> > stripe sizes to anything larger than 32K? I suppose something like TCQ
>> > would help the situation, but does anyone know how this situation is
>> > usually handled on the RAID controllers?
>>
>> Large I/O sizes and large stripe sizes only benefit benchmarks and a
>> narrow class of real-world applications.
>
> Like file servers on gigabit networks serving large files? :)
>
>> Large stripes have the potential to actually hurt RAID-5 performance,
>> since they make it much harder for the card to do a full stripe
>> replacement instead of a read-modify-xor-write.
>
> This is logical.
>
>> I hate to be all preachy and Linux-like and tell you what you need or
>> don't need, but in all honesty, large I/Os and stripes usually don't
>> help typical filesystem-based mail/squid/mysql/apache server apps. I do
>> have proof-of-concept patches to allow larger I/Os for selected
>> controllers on 64-bit FreeBSD platforms, and I intend to clean up and
>> commit those patches in the next few weeks (no, I'm not ready for nor
>> looking for testers at this time, sorry).
>
> I'm not (currently) nagging for large I/O request patches :) I just
> want to understand what is happening currently if the stripe size is
> 256 kB (which is the default at least on the IBM ServeRAID 8k, and I
> think recent CISS controllers have 128 kB) and the OS chops I/O into
> 64k blocks. I have compared Linux performance and FreeBSD performance
> and I can't conclude anything from that; for FreeBSD it's not like all
> requests (e.g. 4 64 kB requests) go to a single drive at a time, and
> it's not like they always get split.
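First, the geometry side of your question: with a 256 kB stripe unit on a
two-drive RAID0, an aligned 64 kB request always falls inside a single
stripe unit, so it goes to one drive, and four consecutive sequential
requests land on the same drive before the array moves on to the other
one. Here's a back-of-the-envelope sketch of that mapping (plain userland
C using your example numbers of two disks, a 256 kB stripe, and 64 kB
requests; it's only an illustration of the arithmetic, not what any
controller actually runs):

/*
 * Back-of-the-envelope RAID0 mapping using the numbers from the
 * question: 2 disks, 256 kB stripe unit, 64 kB requests.  Purely an
 * illustration of the arithmetic, not how any controller firmware
 * actually implements it.
 */
#include <stdio.h>

#define NDISKS          2
#define STRIPE_SIZE     (256 * 1024)    /* bytes per stripe unit */
#define IO_SIZE         (64 * 1024)     /* what the OS hands down */

/* Report which member disk(s) a request at [offset, offset+len) touches. */
static void
map_request(unsigned long long offset, unsigned long len)
{
        unsigned long long first_su = offset / STRIPE_SIZE;
        unsigned long long last_su = (offset + len - 1) / STRIPE_SIZE;

        if (first_su == last_su)
                printf("offset %7llu: %lu bytes, all on disk %llu\n",
                    offset, len, first_su % NDISKS);
        else
                printf("offset %7llu: %lu bytes, split across disks "
                    "%llu and %llu\n", offset, len,
                    first_su % NDISKS, last_su % NDISKS);
}

int
main(void)
{
        unsigned long long off;

        /*
         * Walk the first megabyte in 64 kB requests.  With a 256 kB
         * stripe, every aligned request stays on one disk, and four
         * consecutive requests hit the same disk before moving on.
         */
        for (off = 0; off < 1024 * 1024; off += IO_SIZE)
                map_request(off, IO_SIZE);
        return (0);
}

The output shows the drive changing only every fourth request, which is
roughly why a second spindle doesn't buy a sequential workload much until
the workload covers more than one stripe unit at a time.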
As for what happens to the request inside FreeBSD: it can get split up
twice, once in GEOM and once in the block layer above GEOM. In both
cases the split requests are put onto the g_down queue in series as they
are created, and the g_down thread then pops them off the queue and
sends them to the driver in series. There is no waiting in between for
the first part of the request to complete before the second part is sent
down.

For writes, the performance penalty of smaller I/Os (assuming no RAID-5
effects) is minimal; most caching controllers and drives will batch the
concurrent requests together, so the only loss is the slight overhead of
the extra transaction setup and completion. For reads, the penalty can
be greater, because the controller/disk will try to execute the first
request immediately rather than wait for the second part to be
requested, which opens the door to extra rotational and head-movement
delays. Many caching RAID controllers offer a read-ahead feature to
counteract this; however, my testing has shown little measurable benefit
from it, so YMMV.

Scott
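P.S. If it helps to visualize the splitting described above, here's a
deliberately simplified sketch (plain userland C, not actual GEOM or
g_down code; the 64 kB figure is just the traditional MAXPHYS limit) of
a large request being chopped into pieces that are queued one after
another, without waiting for any of them to complete:

/*
 * Deliberately simplified sketch of the splitting described above:
 * a large request is chopped into MAXPHYS-sized pieces, and each
 * piece is queued immediately, without waiting for the previous one
 * to complete.  Plain userland C, my own simplification; this is
 * not actual GEOM or g_down code.
 */
#include <stdio.h>

#define MAXPHYS_SIM     (64 * 1024)     /* traditional 64 kB MAXPHYS limit */

struct fake_bio {
        unsigned long long offset;
        unsigned long      length;
};

/* Stand-in for handing one piece to the g_down queue / driver. */
static void
enqueue(const struct fake_bio *bp)
{
        printf("queue: offset %llu, length %lu\n", bp->offset, bp->length);
}

/* Split one large request into MAXPHYS-sized chunks, issued in series. */
static void
split_and_queue(unsigned long long offset, unsigned long length)
{
        struct fake_bio chunk;

        while (length > 0) {
                chunk.offset = offset;
                chunk.length = (length > MAXPHYS_SIM) ? MAXPHYS_SIM : length;
                enqueue(&chunk);        /* no waiting for completion here */
                offset += chunk.length;
                length -= chunk.length;
        }
}

int
main(void)
{
        /* A 256 kB write becomes four back-to-back 64 kB requests. */
        split_and_queue(0, 256 * 1024);
        return (0);
}

A 256 kB write thus becomes four 64 kB transactions issued back to back,
so the extra cost is mostly the per-transaction setup and completion
overhead rather than any serialization of the parts.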