From: krad <kraduk@googlemail.com>
To: 'Mike Meyer'
Cc: freebsd-hackers@freebsd.org
Date: Sun, 31 May 2009 13:13:24 +0100
Subject: RE: Request for opinions - gvinum or ccd?

Please don't whack gstripe and ZFS together. It should work, but it is ugly,
you might run into issues, and getting back out of that setup will be harder
than with a pure ZFS solution.

ZFS supports striping across vdevs by default. For example,

  zpool create data da1
  zpool add data da2

creates a data set striped across da1 and da2.

  zpool create data mirror da1 da2
  zpool add data mirror da3 da4

creates a RAID 10 across all four drives, and

  zpool create data raidz2 da1 da2 da3 da5
  zpool add data raidz2 da6 da7 da8 da9

creates a RAID 60. If you replace the add keyword with attach, mirroring is
performed rather than striping.
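To make that add/attach distinction concrete, here is a minimal sketch (da1
and da2 are just placeholder device names): starting from a single-disk pool,
attach turns the existing disk into a mirror instead of growing the stripe.

  zpool create data da1        # single-disk pool
  zpool attach data da1 da2    # da2 becomes a mirror of da1 (redundancy, no extra space)
  zpool status data            # layout now shows one mirror vdev holding da1 and da2

Had we run "zpool add data da2" instead, da2 would have become a second
top-level vdev and the pool would simply stripe across both disks.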
Just for fun, here is one of the configs off one of our Sun X4500s at work.
It's OpenSolaris, not FreeBSD, but it is ZFS. One whopping big array of
~28 TB:

  zpool create -O compression=lzjb -O atime=off data \
    raidz2 c3t0d0 c4t0d0 c8t0d0 c10t0d0 c11t0d0 \
           c3t1d0 c4t1d0 c8t1d0 c9t1d0 c10t1d0 c11t1d0 \
    raidz2 c3t2d0 c4t2d0 c8t2d0 c9t2d0 c11t2d0 \
           c3t3d0 c4t3d0 c8t3d0 c9t3d0 c10t3d0 c11t3d0 \
    raidz2 c3t4d0 c4t4d0 c8t4d0 c10t4d0 c11t4d0 \
           c3t5d0 c4t5d0 c8t5d0 c9t5d0 c10t5d0 c11t5d0 \
    raidz2 c3t6d0 c4t6d0 c8t6d0 c9t6d0 c10t6d0 c11t6d0 \
           c3t7d0 c4t7d0 c9t7d0 c10t7d0 c11t7d0 \
    spare  c10t2d0 c8t7d0

  $ zpool status
    pool: archive-2
   state: ONLINE
  status: The pool is formatted using an older on-disk format. The pool can
          still be used, but some features are unavailable.
  action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
          pool will no longer be accessible on older software versions.
   scrub: scrub completed after 11h9m with 0 errors on Sun May 31 01:09:22 2009
  config:

          NAME         STATE     READ WRITE CKSUM
          archive-2    ONLINE       0     0     0
            raidz2     ONLINE       0     0     0
              c3t0d0   ONLINE       0     0     0
              c4t0d0   ONLINE       0     0     0
              c8t0d0   ONLINE       0     0     0
              c10t0d0  ONLINE       0     0     0
              c11t0d0  ONLINE       0     0     0
              c3t1d0   ONLINE       0     0     0
              c4t1d0   ONLINE       0     0     0
              c8t1d0   ONLINE       0     0     0
              c9t1d0   ONLINE       0     0     0
              c10t1d0  ONLINE       0     0     0
              c11t1d0  ONLINE       0     0     0
            raidz2     ONLINE       0     0     0
              c3t2d0   ONLINE       0     0     0
              c4t2d0   ONLINE       0     0     0
              c8t2d0   ONLINE       0     0     0
              c9t2d0   ONLINE       0     0     0
              c11t2d0  ONLINE       0     0     0
              c3t3d0   ONLINE       0     0     0
              c4t3d0   ONLINE       0     0     0
              c8t3d0   ONLINE       0     0     0
              c9t3d0   ONLINE       0     0     0
              c10t3d0  ONLINE       0     0     0
              c11t3d0  ONLINE       0     0     0
            raidz2     ONLINE       0     0     0
              c3t4d0   ONLINE       0     0     0
              c4t4d0   ONLINE       0     0     0
              c8t4d0   ONLINE       0     0     0
              c10t4d0  ONLINE       0     0     0
              c11t4d0  ONLINE       0     0     0
              c3t5d0   ONLINE       0     0     0
              c4t5d0   ONLINE       0     0     0
              c8t5d0   ONLINE       0     0     0
              c9t5d0   ONLINE       0     0     0
              c10t5d0  ONLINE       0     0     0
              c11t5d0  ONLINE       0     0     0
            raidz2     ONLINE       0     0     0
              c3t6d0   ONLINE       0     0     0
              c4t6d0   ONLINE       0     0     0
              c8t6d0   ONLINE       0     0     0
              c9t6d0   ONLINE       0     0     0
              c10t6d0  ONLINE       0     0     0
              c11t6d0  ONLINE       0     0     0
              c3t7d0   ONLINE       0     0     0
              c4t7d0   ONLINE       0     0     0
              c9t7d0   ONLINE       0     0     0
              c10t7d0  ONLINE       0     0     0
              c11t7d0  ONLINE       0     0     0
          spares
            c10t2d0    AVAIL
            c8t7d0     AVAIL

  errors: No known data errors

ZFS also checksums all data blocks written to the drives, so data integrity
is guaranteed. If you are paranoid you can also set it to keep multiple
copies of each file. This will eat up loads of disk space, so it's best to
use it sparingly on the most important stuff. You can only do it on a
per-filesystem basis, but that isn't a big deal with ZFS:

  zfs create data/important_stuff
  zfs set copies=3 data/important_stuff

You can do compression as well; the big example above has it enabled. In the
near future encryption and dedup are also getting integrated into ZFS. That
is probably happening in the next few months on OpenSolaris, but if you want
those features in FreeBSD I guess it will take at least six months after
that.

With regards to your backup, I suggest you definitely look at doing regular
fs snapshots. To be really safe, I'd install the TB drive (probably worth
getting another as well, as they are cheap) into another machine, and have
it in another room, or another building if possible. Replicate your data
using incremental zfs sends, as this is the most efficient way, and you can
easily push it through ssh for security as well. Rsync will work fine, but
you will lose all your ZFS fs settings with it, as it works at the user
level, not the fs level.
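As a rough sketch of that snapshot-and-send approach (the dataset, snapshot
and host names here are made up, and the pool on the backup machine is
assumed to already exist):

  # initial full replication
  zfs snapshot data/home@2009-05-31
  zfs send data/home@2009-05-31 | ssh backuphost zfs receive backup/home

  # later runs only send the blocks changed since the previous snapshot
  zfs snapshot data/home@2009-06-01
  zfs send -i data/home@2009-05-31 data/home@2009-06-01 | \
      ssh backuphost zfs receive backup/home

For the incremental receive to succeed the target filesystem must not have
been modified in between; passing -F to zfs receive rolls it back to the
most recent snapshot first.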
Hope this helps. I'm really looking forward to ZFS maturing on BSD and
having pure ZFS systems 8)

-----Original Message-----
From: owner-freebsd-hackers@freebsd.org
[mailto:owner-freebsd-hackers@freebsd.org] On Behalf Of Mike Meyer
Sent: 30 May 2009 21:28
To: xorquewasp@googlemail.com
Cc: freebsd-hackers@freebsd.org
Subject: Re: Request for opinions - gvinum or ccd?

On Sat, 30 May 2009 20:18:40 +0100 xorquewasp@googlemail.com wrote:
> > If you're running a 7.X 64-bit system with a couple of gig of RAM,
> > expect it to be in service for years without having to reformat the
> > disks, and can afford another drive, I'd recommend going to raidz on a
> > three-drive system. That will give you close to the size/performance
> > of your RAID0 system, but let you lose a disk without losing data. The
> > best you can do with zfs on two disks is a mirror, which means write
> > throughput will suffer.
>
> Certainly a lot to think about.
>
> The system has 12gb currently, with room to upgrade. I currently have
> two 500gb drives and one 1tb drive. I wanted the setup to be essentially
> two drives striped, backed up onto one larger one nightly. I wanted the
> large backup drive to be as "isolated" as possible, eg, in the event of
> some catastrophic hardware failure, I can remove it and place it in
> another machine without a lot of stressful configuration to recover the
> data (not possible with a RAID configuration involving all three drives,
> as far as I'm aware).

The last bit is wrong. Moving a zfs pool between two systems is pretty
straightforward. The configuration information is on the drives; you just
do "zpool import" after plugging them in, and if the mount point exists,
it'll mount it. If the system crashed with the zfs pool active, you might
have to do -f to force an import. Geom is pretty much the same way, except
you can configure it to not write the config data to disk, thus forcing you
to do it manually (which is what you expect). I'm not sure geom is as smart
if the drives change names, though.

RAID support and volume management have come a long way from the days of
ccd and vinum; zfs in particular is a major advance. If you aren't aware of
its advantages, take the time to read the zfs & zpool man pages, at the
very least, before committing to geom (not that geom isn't pretty slick in
and of itself, but zfs solves a more pressing problem).

Hmm. Come to think of it, you ought to be able to use gstripe to stripe
your disks, then put a zpool on that, which should get you the advantages
of zfs with a striped disk. But that does seem odd to me.

http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
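As a footnote to the pool-move point above, a minimal sketch of what that
looks like in practice ("data" just stands in for whatever the pool is
called):

  zpool export data     # on the old machine, if it is still alive
  # move the disks, then on the new machine:
  zpool import          # with no arguments, lists pools found on attached disks
  zpool import data     # imports the pool and mounts its filesystems
  zpool import -f data  # force it if the pool was never cleanly exported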