Date: Mon, 22 Jul 2002 21:25:37 -0700 (PDT)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Peter Jeremy <peter.jeremy@alcatel.com.au>
Cc: Andreas Koch <koch@eis.cs.tu-bs.de>, freebsd-stable@FreeBSD.ORG
Subject: Re: 4.6-RC: Glacial speed of dump backups
Message-ID: <200207230425.g6N4PbuP057589@apollo.backplane.com>
References: <20020606204948.GA4540@ultra4.eis.cs.tu-bs.de> <20020722081614.E367@gsmx07.alcatel.com.au> <20020722100408.GP26095@ultra4.eis.cs.tu-bs.de> <200207221943.g6MJhIBX054785@apollo.backplane.com> <20020723131318.F38313@gsmx07.alcatel.com.au>
:...
:>
:> DUMP: finished in 140 seconds, throughput 6413 KBytes/sec (8 MB cache)
:> DUMP: finished in 144 seconds, throughput 6235 KBytes/sec (4 MB cache)
:> DUMP: finished in 234 seconds, throughput 3836 KBytes/sec (0 MB cache)
:
:Impressive. This is definitely much easier than trying to merge reads
:of adjacent blocks into one physical read to a scatter buffer. I'll
:do some experimenting when I have a chance.
:
:Two notes:
:1) I thought I'd tried something similar during my initial investigations
: (VLB 486 with IDE disks). I found the cost of reading the extra data
: ate most of the savings from sensible block ordering.
:2) At the time, someone (I didn't keep a record) commented that dump
: deliberately re-read inodes to try and minimise problems due to
: active filesystems.
:
:The first point probably isn't relevant any more.
:
:The second point means that your cache may make dumping active
:partitions more dangerous. (Since the cached data may not be relevant
:any longer). (Softupdate snapshots would help here, but they're not in
:-STABLE and I don't think Kirk's fixed a race-to-root deadlock yet).

Dump has always had problems dealing with live filesystems, and it's
even worse now that we can't dump via a buffered block device, because
the filesystem state is going to be out of sync with the raw device
whether dump re-reads the inodes or not. So even though dump does try
to re-read inodes to check for changes, it is unlikely that our meager
cache will make things worse than the kernel's buffer cache already
makes them. It's basically 'sync a lot, then pray'.

My friend Dave (idiom.com) gave up using dump/restore for end-user
filesystems a long time ago (7 years) because of this issue.
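For context, the cache being measured in the throughput numbers above sits between dump and the raw device, so blocks dump reads repeatedly (it re-reads inodes across its passes) only hit the disk once. A minimal user-space sketch of that idea follows; the names, sizes, and direct-mapped geometry are all hypothetical illustration, not the actual patch:

```c
#include <assert.h>
#include <string.h>

#define BSIZE    512            /* read unit (illustrative) */
#define NLINES   64             /* cache lines; real cache was MBs */
#define DISKBLKS 256

static char disk[DISKBLKS][BSIZE];  /* stand-in for the raw device */
static long physical_reads;         /* counts trips to the "device" */

struct line { long bno; int valid; char data[BSIZE]; };
static struct line cache[NLINES];

static void raw_read(long bno, char *buf) {
    memcpy(buf, disk[bno], BSIZE);
    physical_reads++;
}

/* Direct-mapped lookup: a re-read of the same block is a cache hit
 * and never touches the device a second time. */
static void cached_read(long bno, char *out) {
    struct line *l = &cache[bno % NLINES];
    if (!l->valid || l->bno != bno) {
        raw_read(bno, l->data);
        l->bno = bno;
        l->valid = 1;
    }
    memcpy(out, l->data, BSIZE);
}
```

A real version would pick replacement and sizing more carefully, but the win in the numbers above comes from exactly this: the second and later reads of hot metadata cost nothing.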
:I suspect a cleaner (though much more effort) approach would be to
:make dump much chummier with the UFS code in the kernel so that it
:used the kernel FS buffer (with some hooks to prevent dump blowing
:the buffer cache for other processes).
:
:Peter
Yes, the snapshot code will make dump useful again :-) I dislike the
complexity of the snapshot code, though. I'd rather see a solution
at the raw device level that, say, copies the original data into swap
when new data overwrites it, for the duration of the snapshot. Or
something like that. Since the snapshot code is not likely ever to
be backported to -stable, it's worth thinking about some sort of
solution for -stable.
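The raw-device snapshot idea in that paragraph (preserve a block's original contents the first time it is overwritten, and let the backup read the preserved copy) can be illustrated in user space. Arrays stand in for the device and the swap save area, and every name here is hypothetical:

```c
#include <assert.h>
#include <string.h>

#define NBLOCKS 8
#define BSIZE   16

static char disk[NBLOCKS][BSIZE];   /* the "raw device" */
static char saved[NBLOCKS][BSIZE];  /* stand-in for the swap save area */
static int  is_saved[NBLOCKS];      /* 1 once the original is preserved */
static int  snap_active;

static void snapshot_begin(void) {
    memset(is_saved, 0, sizeof(is_saved));
    snap_active = 1;
}

/* Write path: copy-on-write.  Before the FIRST overwrite of a block
 * during an active snapshot, stash the original contents aside. */
static void block_write(int bno, const char *data) {
    if (snap_active && !is_saved[bno]) {
        memcpy(saved[bno], disk[bno], BSIZE);
        is_saved[bno] = 1;
    }
    memcpy(disk[bno], data, BSIZE);
}

/* Snapshot read: dump sees the device as it was at snapshot_begin(),
 * while the live filesystem keeps writing to the real blocks. */
static void snapshot_read(int bno, char *out) {
    memcpy(out, is_saved[bno] ? saved[bno] : disk[bno], BSIZE);
}
```

The appeal of doing this below the filesystem is that dump needs no UFS knowledge at all; the cost is the save-area space and one extra copy per first-overwrite, which lasts only for the duration of the dump.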
-Matt
Matthew Dillon
<dillon@backplane.com>
To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-stable" in the body of the message
