Date: Tue, 28 Mar 1995 08:58:34 -0800
From: pascal@netcom.com (Richard A Childers)
To: freebsd-questions@FreeBSD.org, guido@IAEhv.nl
Subject: Re: question about dump
Message-ID: <199503281658.IAA03784@netcom19.netcom.com>
" ... we get complaints from dump all the time, like:

   DUMP: read error from /dev/rsd1h: Invalid argument: [block -1444115938]: count=8192
   DUMP: bread: lseek2 fails!
   DUMP: read error from /dev/rsd1h: Invalid argument: [sector -1444115938]: count=512
   DUMP: bread: lseek fails
   DUMP: read error from /dev/rsd1h: Invalid argument: [sector -126001564]: count=512
   DUMP: bread: lseek fails"

These messages indicate that the device driver is trying to access a block
that does not exist. 'bread' is 'block read', ie, 'go read a block of data
from a physical device'.

Based upon the bizarre numbers, I have to wonder if you have a very large,
disk-sized filesystem. The filesystem itself has been enhanced quite a bit
over the decades since the Berkeley Fast File System was first introduced,
and is capable of, what, gigabyte sizes, with terabyte ranges just around
the corner ( doubling the number of bits allocated to the inode, perhaps
modification to incorporate Sun's vnode concept as well, although I'm not
privy to these details ) ... but I'm not sure if dump(8) and restore(8)
have received similar attention.

<< climbing up on soapbox >>

Speaking as a professional systems administrator, I'd like to point out
that while there is a great deal of satisfaction in having a filesystem
that spans an entire disk, there are some hidden 'gotchas'.

Consider backups, to start. Such a filesystem runs the risk of occupying
multiple tapes. The more tapes in the backup, the longer it takes to back
up ... and the longer it takes to restore. In a production environment,
where a network and multiple tape devices may be available, it is not
unreasonable to suggest splitting the disk into several filesystems which
could be restored in parallel - reducing downtime dramatically.

Damage to the disk is another factor to consider. Let's pretend another
large earthquake has occurred, and your hard drive got bumped and now has
a few bad blocks.
Those few bad blocks may be enough to render the entire filesystem
inaccessible ... even though the rest of the disk is perfectly readable
and writeable. If multiple filesystems were used instead, the hit would be
a smaller one, and, barring the destruction of a critical filesystem, once
again, downtime is minimized.

One last useful tip: sizing partitions. Try for modular sizing, so that
you can shuffle filesystems about when these things happen. Try splitting
the disk into four equal filesystems, for instance. ( Obviously this
doesn't apply to the drive containing the root, swap, and /usr
filesystems. ) This is a win in a lot of ways ... it's easy for you to
maintain, it allows you to give separate development projects separate
partitions so that they don't have to fight with each other for disk
space, and it spreads your eggs across several baskets.

<< climbing off soapbox >>

A good way to see whether it is dump(8), or the filesystem, is to try dd(1):

   % /usr/bin/su -
   Password: <g0bbledyg00k>
   # dd if=/dev/rsd1h of=/dev/null

( If you want it to go faster, use a bs=NNNb argument to dd(1) based on
the geometry of the disk, ie, something that evaluates to an entire track,
or something that evaluates to an entire cylinder. If none of this makes
sense, you probably don't want to try it until you've asked a few more
questions. )

< kernel messages omitted >

I suspect this is a symptom of the problem and not the actual cause. Note
that the messages only occur when you are running dump(8) ( so far ).

-- richard

"Feminism is a multi-billion dollar industry."  Christina Sommers, author

richard childers     san francisco, california     pascal@netcom.com
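P.S. One hedged illustration of where negative block numbers like the
quoted -1444115938 can come from: if some block or sector number
overflows a signed 32-bit integer, it wraps around to a negative value.
This little shell sketch is only an illustration - the input value
2850851358 is my own assumption, chosen purely to show the wraparound,
not something taken from the poster's system:

```shell
# Interpret an unsigned value as a signed 32-bit integer, the way a
# wrapped block number would appear in an error message.
wrap32() {
    v=$(( $1 & 0xffffffff ))            # keep only the low 32 bits
    if [ "$v" -ge 2147483648 ]; then    # high bit set: value is negative
        v=$(( v - 4294967296 ))         # subtract 2^32 to get signed form
    fi
    echo "$v"
}

# Hypothetical block number just past 2^31, assumed for illustration:
wrap32 2850851358     # prints -1444115938
```

Note that 2850851358 - 2^32 = -1444115938, so a 32-bit counter that has
run past 2^31 would report exactly this sort of large negative number.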