From owner-freebsd-hackers@FreeBSD.ORG Tue Oct 16 15:05:50 2007
Date: Tue, 16 Oct 2007 08:05:50 -0700
From: Jeremy Chadwick <jdc@parodius.com>
To: Simun Mikecin
Cc: freebsd-hackers@freebsd.org
Subject: Re: Filesystem snapshots dog slow
Message-ID: <20071016150550.GA40548@eos.sc1.parodius.com>
In-Reply-To: <5949.42099.qm@web36608.mail.mud.yahoo.com>
User-Agent: Mutt/1.5.16 (2007-06-09)
List-Id: Technical Discussions relating to FreeBSD

On Tue, Oct 16, 2007 at 06:44:36AM -0700, Simun Mikecin wrote:
> Not using a snapshot for dump may produce inconsistent dump image if
> there was writing during dumping. Maybe it should say something like
> "should use -L when dumping live read-write filesystems for the result
> to be consistent (at the cost of speed)!". But that is too long :(

I thought that the only way you could get an inconsistent filesystem
dump was if you used the -C (caching) option in dump?
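For reference, a minimal sketch of the two dump invocations being
compared. The device name and output path are hypothetical examples,
not taken from this thread:

```shell
# Dump a live read-write UFS2 filesystem via a temporary snapshot (-L),
# which yields a consistent image at the cost of snapshot overhead.
# /dev/ad4s1f and /backup/usr.dump are hypothetical.
dump -0 -L -a -u -f /backup/usr.dump /dev/ad4s1f

# The same dump without -L: can be faster on large filesystems, but
# writes that land mid-dump may leave the image inconsistent.
dump -0 -a -u -f /backup/usr.dump /dev/ad4s1f
```

(-0 is a full dump, -a auto-sizes the output, -u records the dump in
/etc/dumpdates, -f names the output file.)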
I tried removing -L from my (home) backups, and I found that it made a
world of difference (speed-wise) on all of my filesystems except for
/storage (see thread). Without -L, dump estimated 2 hours 10 minutes;
with -L, it estimated 50 minutes. The inconsistency in speed/time is a
little odd, but I can't argue with the results.

The fact of the matter is, people always want a consistent (that is to
say, working) backup. So if using -L is the proper solution regardless
of the UFS2 drawbacks, then I'll have to live with that. Or go to ZFS
-- more on that below.

> One of the great things about ZFS is that you can forget about things
> like gstripe(8) or dump(8). You only need two commands: zpool and
> zfs. ZFS is not just a filesystem, it's also a logical volume
> management tool.
>
> ZFS on FreeBSD is considered experimental since it is very new. But
> from experience so far with it, only a few glitches still exist:
> 1) root on ZFS is possible, but it could give you more problems than
>    it solves (for now, it's best to have a small, say 512MB, root
>    filesystem running UFS, but everything else on ZFS)
> 2) using a zvol on ZFS for swap can cause a panic
> 3) using ZFS on FreeBSD/i386 can cause a panic (I suggest using
>    UFS+gjournal instead of ZFS on FreeBSD/i386)

In regards to the items you list: 1) doesn't apply (I have no issues
with using UFS on the root filesystem), 2) I'm not familiar with this
feature of ZFS, and 3) would definitely impact us, as we use i386
exclusively.

Now, that said... there's a current thread discussing system panics
(on i386 and amd64!) with ZFS (re: "ZFS kmem_map too small"). This is
one of the threads I'm referring to.

> Personally, I would choose ZFS on a FreeBSD/amd64 production machine.

None of the systems here contain more than 2GB of RAM, and generally
speaking won't benefit from any of the bonuses amd64 offers at this
time.
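To make the "two commands" point concrete, here is a hedged sketch of
what replacing gstripe + dump looks like with zpool/zfs. The pool and
disk names match the /storage example in this message; the snapshot
name and backup path are hypothetical:

```shell
# Create a striped (RAID-0-like) pool from two disks -- roughly what
# gstripe + newfs accomplished before.
zpool create storage ad4 ad6

# Snapshots replace dump -L as the source of consistent,
# point-in-time copies.  "nightly" is a hypothetical snapshot name.
zfs snapshot storage@nightly

# A snapshot can be serialized with zfs send (the counterpart of
# dump/restore) and saved or piped to another pool.
zfs send storage@nightly > /backup/storage-nightly.zfs
```

The appeal is that striping, snapshotting, and replication are all
administered through the same two tools instead of gstripe(8),
mksnap_ffs/dump -L, and restore(8).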
There are other scenarios here that don't permit me to run amd64 on
our production boxes (nothing to do with FreeBSD), so for now we stick
with i386.

On the bright side, we do now have a machine running RELENG_7 (I
installed the box this weekend), but the three requirements for this
box are: 1) the machine remains up and responsive as close to 24x7 as
possible (e.g. stalling disk I/O like dump -L on large UFS2
filesystems isn't acceptable), 2) it remains stable, and 3) it runs
i386 (the developer is not familiar with 64-bit environments). I'll
have to discuss with the developer whether he feels comfortable with
ZFS there.

On my home machine, I'm more than willing to run amd64 -- and I have
in the past (but went back to i386 because I did not feel comfortable
with things like /usr/lib32; discuss off-list if interested) -- but my
requirements are a bit different.

In the case of my home machine, I spent a little time this morning
migrating it from UFS2/gstripe (the /storage filesystem consists of
two SATA300 disks in a RAID-0 array -- yes, you read that right, hence
the nightly backups!) to a ZFS storage pool (zpool).

Filesystem  1024-blocks      Used      Avail  Capacity  Mounted on
storage       957873408  94645376  863228032       10%  /storage

So far so good; and I wanted to try scrubbing, just for fun...

icarus# zpool status
  pool: storage
 state: ONLINE
 scrub: scrub in progress, 49.74% done, 0h6m to go
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          ad4       ONLINE       0     0     0
          ad6       ONLINE       0     0     0

errors: No known data errors

I'll have to see how ZFS snapshots work out.

-- 
| Jeremy Chadwick                                 jdc at parodius.com |
| Parodius Networking                        http://www.parodius.com/ |
| UNIX Systems Administrator                   Mountain View, CA, USA |
| Making life hard for others since 1977.             PGP: 4BD6C0CB   |