From owner-freebsd-fs@FreeBSD.ORG Thu Jul 19 20:07:07 2007
From: Doug Rabson <dfr@rabson.org>
To: freebsd-fs@freebsd.org
Cc: Pawel Jakub Dawidek, Mark Powell
Date: Thu, 19 Jul 2007 20:27:43 +0100
Subject: Re: ZFS & GEOM with many odd drive sizes
Message-Id: <200707192027.44025.dfr@rabson.org>
In-Reply-To: <20070719181313.G4923@rust.salford.ac.uk>
References: <20070719102302.R1534@rust.salford.ac.uk> <20070719135510.GE1194@garage.freebsd.pl> <20070719181313.G4923@rust.salford.ac.uk>

On Thursday 19 July 2007, Mark Powell wrote:
> On Thu, 19 Jul 2007, Pawel Jakub Dawidek wrote:
> > On Thu, Jul 19, 2007 at 11:19:08AM +0100, Mark Powell wrote:
> >> What I want to know is: does the new volume have to have the same
> >> device name, or can it be substituted with another? That is, can I
> >> remove one of the 448GB gconcats (e.g. gc1) and replace it with a
> >> new 750GB drive (e.g. ad6)?
> >> Eventually, once all the volumes are replaced, the zpool could be,
> >> for example, 4x750GB, or 2.25TB of usable storage.
> >> Many thanks for any advice on these matters, which are new to me.
> >
> > All you described above should work.
>
> Thanks Pawel, for your response and even more so for all the time you
> have spent working on ZFS.
>
> Should I expect much greater CPU usage with ZFS?
> I previously had a geom raid5 array which barely broke a sweat on
> benchmarks, i.e. simple large dd reads and writes. With ZFS on the
> same hardware I see 50-60% system CPU usage during such tests.
> Before, the network was the bottleneck; now it's the ZFS array. I
> expected ZFS would have to do a bit more 'thinking', but is such a
> dramatic increase normal?
>
> Many thanks again.

ZFS checksums every block it reads from disk, which may be your
problem. In normal usage this isn't a big deal, because many reads are
served from the cache.
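
For reference, a rough sketch of the replace-and-grow cycle being
discussed. The pool name "tank" is only an assumption here; gc1 and
ad6 are the device names from Mark's mail, and the exact behaviour at
the end depends on the ZFS version in use:

    # Swap one 448GB gconcat for the new 750GB drive:
    zpool replace tank gc1 ad6

    # Wait for the resilver to finish before touching the next device:
    zpool status tank

    # Repeat for the remaining members. Once every device has been
    # replaced with a larger one, the extra capacity should become
    # available; on the ZFS of that era an export/import of the pool
    # was sometimes needed before the new size showed up.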
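
On the CPU question, a couple of (speculative) things worth checking
to see how much of the system time really is checksum work. The
dataset name below is hypothetical, and the sysctl names are those
used by the FreeBSD port and may differ between versions:

    # See which checksum algorithm the pool is using (fletcher2 was
    # the default around that time; sha256 costs considerably more):
    zfs get checksum tank

    # As an experiment only -- not recommended for data you care
    # about -- disable checksums on a scratch dataset and rerun the
    # dd benchmark to see how much CPU the checksums account for:
    zfs create tank/nocsum
    zfs set checksum=off tank/nocsum

    # Cache effectiveness: repeated reads served from the ARC skip
    # both the disk and the re-checksum:
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses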