From: Matt Simerson <matt@corp.spry.com>
To: freebsd-fs@freebsd.org
Cc: "Hearn, Trevor"
Date: Fri, 7 Aug 2009 13:23:41 -0700
Subject: Re: UFS Filesystem issues, and the loss of my hair...

On Aug 6, 2009, at 6:51 AM, Hearn, Trevor wrote:

> First off, let me state that I love FreeBSD. I've used it for years,
> and have not had any major problems with it... Until now.
>
> As you can tell, I work for a major university.
> I set up a large storage array to hold data for a project they have
> here. No great shakes, just some standard files and such.
>
> I'd buy a fella, or gal, a cup of coffee and a pop-tart if they
> could help a brother out. I have checked out this link:
> http://phaq.phunsites.net/2007/07/01/ufs_dirbad-panic-with-mangled-entries-in-ufs/
> and decided that I need to give this a shot after hours, but being
> the kinda guy I am, I need to make sure I am covering all of my bases.
>
> Anyone got any ideas?
>
> Thanks!

Have you given any consideration to ZFS? With ZFS there's no reason to
have all those slices. Just stripe the two RAID 6 arrays together and
have a single 26TB zpool. No GPT or UFS to mess with. Just point ZFS
at the raw disks and off you go. I'm doing that with Areca 1231ML
controllers in boxes with 24 disks each. The two 12-channel RAID cards
each present a RAID volume to the OS, and zpool stripes them together.

One of the more useful features of ZFS is file system compression. You
may find that with compression enabled, you can get by with 13TB of
storage. Then you have one RAID 6 array as the data store and the 2nd
array for backups on each machine. With ZFS, you can send snapshots of
the data partition to the backup every hour, or even every minute,
without any appreciable impact.

back01# zfs get compression back01/var
NAME        PROPERTY     VALUE  SOURCE
back01/var  compression  gzip   local

back01# zfs get compressratio back01/var
NAME        PROPERTY       VALUE  SOURCE
back01/var  compressratio  2.16x  -

I'm using gzip compression, and I fit over twice as much data on the
filesystem as I'd otherwise be able to. You can get more aggressive
with gzip-9 if you need to.

You could use your backup server as a proof of concept. Install
FreeBSD 8-BETA2 amd64 on it. Unmount the existing GPT partitions, wipe
the MBR clean using dd, and create a zpool on just one of the RAID 6
volumes.
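The pool setup amounts to a couple of commands. A rough sketch, with
the caveat that the device names (da0, da1) and the pool name (tank)
are my guesses, not anything from your box -- check what your Areca
volumes show up as with `camcontrol devlist`:

```shell
# Stripe the two RAID 6 volumes into a single pool.
# da0/da1 are placeholder device names; substitute yours.
zpool create tank da0 da1

# Turn on gzip compression for everything in the pool
# (use gzip-9 instead if you want maximum compression).
zfs set compression=gzip tank

# Once data starts landing, see how well it compresses.
zfs get compressratio tank
```

Note there's no newfs, no gpt/fdisk step: zpool labels the devices and
mounts the filesystem for you.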
Set compression=gzip on your filesystem and use rsync to copy all the
files from your 'primary' server. I suspect you'll find that you have
ample storage.

Then you can create another zpool on that same box using the other
RAID 6 volume for backups. You can experiment there with zfs
send/receive, or rsnapshot, or whatever you use. Then get a subset of
your users to start testing on it and see how it fares. I suspect
you'll be quite pleased.

If it works out wonderfully, you can rebuild the other GPT/UFS system
on ZFS as well. Set it up with both RAID 6 volumes in one ZFS pool and
start pushing your backups from the primary server to it. Once
successfully backed up, you can add the 2nd RAID 6 volume on the
primary server into the storage pool to double its disk space.

Matt
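For the send/receive experiment, the basic shape looks like the
following. The pool names ('tank' for data, 'backup' for the second
RAID 6 volume) and snapshot names are made up for illustration:

```shell
# Take a snapshot of the data pool and push the whole thing
# to the backup pool; the first receive creates backup/tank.
zfs snapshot tank@2009-08-07-13:00
zfs send tank@2009-08-07-13:00 | zfs receive backup/tank

# Subsequent snapshots only need an incremental send, which
# transfers just the blocks changed since the last snapshot.
zfs snapshot tank@2009-08-07-14:00
zfs send -i tank@2009-08-07-13:00 tank@2009-08-07-14:00 | \
    zfs receive backup/tank
```

Wrap the incremental pair in a cron job and you have your hourly (or
per-minute) backups; pipe the send through ssh when the backup pool
lives on the other machine.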