From: Freddie Cash <fjwcash@gmail.com>
To: freebsd-performance@freebsd.org, freebsd-questions@freebsd.org
Date: Wed, 6 May 2009 13:30:36 -0700
Subject: Re: filesystem: 12h to delete 32GB of data

On Wed, May 6, 2009 at 12:21 PM, Matthew Seaman wrote:
> Gary Gatten wrote:
>> OT now, but in high I/O environments with high concurrency needs,
>> RAID5 is still the way to go, especially if 90% of I/O is reads. Of
>> course it depends on file size / type as well... Anyway, let's sum it
>> up with "a storage subsystem is only as fast as its slowest link".
>
> It's not just the balance of reads over writes. It's the size and
> sequential location of the I/O requests. RAID5 is good for sequential
> reads -- e.g. streaming a video -- where the system can read whole
> blocks from all the drives involved, calculate parity over the whole
> lot and then push the whole blob of data up to the CPU.
>
> RAID5 is pretty pessimal if your usage pattern is small reads or
> writes randomly scattered over your storage area -- e.g. typical RDBMS
> behaviour -- which works a great deal better on RAID10.
>
> I'd also contend that the essential difference between a really good,
> fast hardware RAID controller and something disappointingly mundane is
> a decent amount of non-volatile cache memory. For most H/W RAID that
> equates to using a battery backup unit. I've been thinking, though,
> that a few GB of fast solid-state hard drive configured as a gjournal
> for a RAID10 (i.e. gstripe + gmirror) might achieve the same effect
> for rather less outlay... It would probably not be too shabby even
> with RAID5, but of course you'd lose the benefit of offloading parity
> calculations onto the RAID controller's CPU. Still, modern multi-core
> CPUs are probably fast enough nowadays to make that viable for many
> purposes.

Depending on the number of drives you are using, ZFS would also be worth looking at.
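As an aside, coming back to Matthew's gjournal-on-SSD idea for a moment: a rough sketch of the setup might look like the following. All device names (da0-da3 for the spinning disks, ada0 for the SSD, /data for the mount point) are made up for illustration.

```shell
# Sketch only: RAID10 via gstripe over two gmirrors, with the UFS
# journal placed on a fast SSD via gjournal.  Device names are
# hypothetical; adjust to your hardware.

# Build two mirrors, then stripe across them (RAID10):
gmirror label m0 /dev/da0 /dev/da1
gmirror label m1 /dev/da2 /dev/da3
gstripe label st0 /dev/mirror/m0 /dev/mirror/m1

# Attach the journal: data provider first, journal provider (the SSD) second:
gjournal label /dev/stripe/st0 /dev/ada0

# newfs with -J so UFS knows the device is journaled, then mount async
# (safe here, because gjournal guarantees consistency):
newfs -J /dev/stripe/st0.journal
mount -o async /dev/stripe/st0.journal /data
```

The point of the SSD is that every write hits the journal first, so journal latency, not parity math, becomes the bottleneck you care about.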
The raidz implementation works quite nicely and, in theory, doesn't suffer from the major issues that RAID5/6 do. ZFS also stripes implicitly across all vdevs, so you can build some very fancy RAID layouts (each vdev can be a mirror, raidz1, raidz2, or just a bunch of disks). I don't know if the version of ZFS in FreeBSD 7.x supports hybrid pools, but the version in FreeBSD 8.0 should; that lets you add SSDs to the pool to be used automatically as "cache" between RAM and the hard drives.

-- 
Freddie Cash
fjwcash@gmail.com
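P.S. The pool layouts described above might look something like this. Disk names (da0-da5, ada0) and the pool name "tank" are placeholders, and the cache-device step assumes a ZFS version with L2ARC support (i.e. the FreeBSD 8.0 version, per the above).

```shell
# Sketch only: ZFS striping implicitly across multiple vdevs.
# A pool striped across two raidz1 vdevs:
zpool create tank raidz da0 da1 da2 raidz da3 da4 da5

# Mixed vdev types in one pool are also possible, e.g.:
#   zpool create tank mirror da0 da1 raidz da2 da3 da4
# (zpool warns about mismatched redundancy; -f overrides)

# Hybrid pool: add an SSD as an L2ARC read cache between RAM and disk:
zpool add tank cache ada0
```

Once the cache device is added, ZFS populates it automatically; there is nothing further to configure.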