Date: Sat, 15 May 2010 17:13:51 -0700
From: Jeremy Chadwick <freebsd@jdc.parodius.com>
To: Kaya Saman <SamanKaya@netscape.net>
Cc: freebsd-fs@freebsd.org
Subject: Re: Quick ZFS mirroring question for non-mirrored pool
Message-ID: <20100516001351.GA50879@icarus.home.lan>
In-Reply-To: <4BEF3137.4080203@netscape.net>
References: <4BEF2F9C.7080409@netscape.net> <4BEF3137.4080203@netscape.net>
On Sun, May 16, 2010 at 02:41:43AM +0300, Kaya Saman wrote:
> Ok I think I've got what I want by using the 'attach' command:
>
> from here: http://prefetch.net/blog/index.php/2007/01/04/adding-a-mirror-to-a-device-in-a-zfs-pool/
>
> rd1# zpool attach zpool1 /mnt/disk1 /mnt/disk3
> rd1# zpool attach zpool1 /mnt/disk2 /mnt/disk4
> rd1# zpool status zpool1
>   pool: zpool1
>  state: ONLINE
>  scrub: resilver completed after 0h0m with 0 errors on Sun May 16 02:36:58 2010
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         zpool1          ONLINE       0     0     0
>           mirror        ONLINE       0     0     0
>             /mnt/disk1  ONLINE       0     0     0
>             /mnt/disk3  ONLINE       0     0     0
>           mirror        ONLINE       0     0     0
>             /mnt/disk2  ONLINE       0     0     0  96.5K resilvered
>             /mnt/disk4  ONLINE       0     0     0  15.3M resilvered

What you have here is the equivalent of RAID-10. It might be more
helpful to look at the above as a "stripe of mirrors".

In this situation, you might be better off with raidz1 (RAID-5 in
concept). You should get better actual I/O performance due to ZFS
distributing the I/O workload across 4 disks rather than 2. At least
that's how I understand it.

> and also space is ok being ~256MB:
>
> rd1# zpool list
> NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> zpool1   246M  32.2M   214M    13%  ONLINE  -
>
> although not sure where 10MB went as all files in this pool are
> 128MB so I should get 256MB no??
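For reference, the capacity tradeoff between the stripe of mirrors above
and a raidz1 layout can be sketched with some rough arithmetic. This is a
simplified model that ignores ZFS label/metadata overhead (which is also
where the "missing" ~10MB goes); the 128MB-per-disk figure is inferred
from the quoted output, not stated explicitly in the message:

```python
# Simplified usable-capacity model for four equal backing devices.
# Assumes 128 MB per disk (inferred); real ZFS reserves some space for
# labels and metadata, hence 246M reported instead of 256M.

def stripe_of_mirrors(disk_mb, n_disks):
    """Two-way mirrors striped together: half the raw space is redundancy."""
    return disk_mb * n_disks // 2

def raidz1(disk_mb, n_disks):
    """raidz1 (RAID-5 in concept): roughly one disk's worth goes to parity."""
    return disk_mb * (n_disks - 1)

print(stripe_of_mirrors(128, 4))  # 256 -> ~246M after ZFS overhead
print(raidz1(128, 4))             # 384
```

So with the same four disks, raidz1 trades one disk of parity for roughly
50% more usable space than the mirrored layout.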
I don't have this problem:

testbox# zpool create mypool mirror da1 da2 mirror da3 da4
testbox# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
mypool   254G    75K   254G     0%  ONLINE  -
testbox# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0

errors: No known data errors

And after creating a 32MByte file:

testbox# dd if=/dev/urandom of=/mypool/file bs=1024 count=32768
32768+0 records in
32768+0 records out
33554432 bytes transferred in 1.522111 secs (22044669 bytes/sec)
testbox# ls -l /mypool/file
-rw-r--r--  1 root  wheel  33554432 May 15 17:12 /mypool/file
testbox# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
mypool   254G  32.1M   254G     0%  ONLINE  -

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
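P.S. One way to read the zpool list numbers above: USED tracks the
logical size of the data, while the mirror's second copy is accounted
for below the pool's usable-space figures. A quick arithmetic sketch
(the doubling is an assumption about 2-way mirror accounting, not
something shown in the output):

```python
# The dd example wrote bs=1024 * count=32768 bytes:
file_bytes = 1024 * 32768
print(file_bytes)   # 33554432, matching ls -l and the ~32.1M USED

# On a 2-way mirror each block is stored twice, so raw disk consumption
# is roughly double the logical size (assumption; zpool list reports
# the logical side of the ledger).
raw_bytes = file_bytes * 2
print(raw_bytes)    # 67108864
```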