Date: Thu, 19 Jul 2007 18:19:14 +0100 (BST)
From: "Mark Powell" <M.S.Powell@salford.ac.uk>
To: Pawel Jakub Dawidek <pjd@FreeBSD.org>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS & GEOM with many odd drive sizes
Message-ID: <20070719181313.G4923@rust.salford.ac.uk>
In-Reply-To: <20070719135510.GE1194@garage.freebsd.pl>
References: <20070719102302.R1534@rust.salford.ac.uk> <20070719135510.GE1194@garage.freebsd.pl>
On Thu, 19 Jul 2007, Pawel Jakub Dawidek wrote:
> On Thu, Jul 19, 2007 at 11:19:08AM +0100, Mark Powell wrote:
>> What I want to know is, does the new volume have to be the same actual
>> device name, or can it be substituted with another?
>> i.e. can I remove, for example, one of the 448GB gconcats, e.g. gc1, and
>> replace that with a new 750GB drive, e.g. ad6?
>> Eventually, once all volumes are replaced, the zpool could be, for
>> example, 4x750GB, or 2.25TB of usable storage.
>> Many thanks for any advice on these matters, which are new to me.
>
> All you described above should work.

Thanks Pawel, for your response and even more so for all your time spent
working on ZFS.
  Should I expect much greater CPU usage with ZFS? I previously had a geom
raid5 array which barely broke a sweat on benchmarks, i.e. simple large dd
reads and writes. With ZFS on the same hardware I notice 50-60% system CPU
usage is usual during such tests. Before, the network was the bottleneck,
but now it's the ZFS array. I expected it would have to do a bit more
'thinking', but is such a dramatic increase normal?
  Many thanks again.

--
Mark Powell - UNIX System Administrator - The University of Salford
Information Services Division, Clifford Whitworth Building,
Salford University, Manchester, M5 4WT, UK.
Tel: +44 161 295 4837  Fax: +44 161 295 5888  www.pgp.com for PGP key
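
A minimal sketch of the replacement procedure discussed above, assuming a
hypothetical pool named "tank" built from the four ~448GB gconcat devices;
the device names are just the examples from the message and each replace
should be allowed to finish resilvering before starting the next one:

    # Swap one gconcat for the new, larger drive; ZFS resilvers onto it.
    zpool replace tank gc1 ad6

    # Watch the resilver progress, then repeat the replace for the
    # remaining devices one at a time.
    zpool status tank

Depending on the ZFS version, the extra capacity only becomes usable once
every device in the vdev has been replaced with a larger one, and may not
show up until the pool is exported and imported again.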
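The "simple large dd reads and writes" benchmark mentioned above could look
roughly like this, assuming a hypothetical filesystem mounted at /tank; the
file name and sizes are only illustrative:

    # Large sequential write through the filesystem.
    dd if=/dev/zero of=/tank/ddtest bs=1m count=8192

    # Large sequential read back; write enough data first that the read
    # is not served entirely from cache.
    dd if=/tank/ddtest of=/dev/null bs=1m

    # Watch system CPU time during the runs.
    top -S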