Date:      01 Nov 1999 20:13:04 +0000
From:      Randell Jesup <rjesup@wgate.com>
To:        freebsd-fs@FreeBSD.ORG
Subject:   Re: journaling UFS and LFS
Message-ID:  <ybu4sf6gecv.fsf@jesup.eng.tvol.net>
In-Reply-To: Terry Lambert's message of "Mon, 1 Nov 1999 21:51:44 +0000 (GMT)"
References:  <199911012151.OAA05179@usr02.primenet.com>

Terry Lambert <tlambert@primenet.com> writes:
>Another thing that could mitigate this, at least on relatively
>quiescent systems (e.g. it'd work for power failures in the
>middle of the night, but wouldn't work for systems with disk
>writes going on) would be "soft read-only".  This would flush
>all writes, and then if no new writes came in for "a while",
>you would set a flag on the in code FS structure that you were
>marking it "soft read-only", and then write out the superblock
>marking it clean.  Subsequent writes would be permitted, but
>only when the "soft read-only" bit was cleared, after remarking
>the super block dirty again.

	This scheme was used for the Amiga filesystems - in fact it was
critical for them, since there was no explicit 'shutdown' command.  The
root block (the equivalent of the superblock) would be marked dirty (and
flushed to disk) whenever metadata (including file sizes) changed, and if
there was no write activity for a second or two everything would be
flushed and the root block would be rewritten with a clean flag.  (This is
a simplification, of course.)
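
	To make the timing concrete, here is a minimal sketch in C of that
lazy dirty/clean-flag mechanism.  All of the names (fs_metadata_write,
fs_idle_tick, write_rootblock, IDLE_TICKS_BEFORE_CLEAN) are made up for
illustration - this is neither the Amiga code nor anything in FreeBSD,
just the shape of the idea:

#include <stdbool.h>
#include <stdio.h>

#define IDLE_TICKS_BEFORE_CLEAN 2      /* "a second or two" with no writes */

struct fs_state {
	bool on_disk_clean;    /* what the on-disk root block currently says */
	int  idle_ticks;       /* timer ticks since the last metadata write */
};

/* Stand-ins for the real I/O paths. */
static void write_rootblock(struct fs_state *fs, bool clean)
{
	(void)fs;
	printf("root block written: %s\n", clean ? "CLEAN" : "DIRTY");
}

static void flush_all_writes(struct fs_state *fs)
{
	(void)fs;
	printf("all pending writes flushed\n");
}

/* Called on any metadata change (create, delete, file size change, ...). */
static void fs_metadata_write(struct fs_state *fs)
{
	fs->idle_ticks = 0;
	if (fs->on_disk_clean) {
		/* Mark the volume dirty on disk before the change lands. */
		write_rootblock(fs, false);
		fs->on_disk_clean = false;
	}
}

/* Called once per timer tick (say, once a second). */
static void fs_idle_tick(struct fs_state *fs)
{
	if (fs->on_disk_clean)
		return;
	if (++fs->idle_ticks >= IDLE_TICKS_BEFORE_CLEAN) {
		/* Quiet long enough: flush, then mark the volume clean. */
		flush_all_writes(fs);
		write_rootblock(fs, true);
		fs->on_disk_clean = true;
	}
}

int main(void)
{
	struct fs_state fs = { .on_disk_clean = true, .idle_ticks = 0 };

	fs_metadata_write(&fs);   /* volume goes dirty on disk */
	fs_idle_tick(&fs);        /* one quiet tick */
	fs_idle_tick(&fs);        /* two quiet ticks: flushed, marked clean */
	return 0;
}

The important property is that the clean flag is only ever written after a
full flush, so a clean root block (or superblock) really does mean the
on-disk state is consistent.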

	On a single-user system, the disks are often (usually) quiescent
and thus would be marked clean (even during use - mine are totally quiet
right now).  On busier systems or under load, however, the superblock
would rarely be left in the clean state.  Also, because of write ordering
and the way files were created, the disk stayed readable during validation
(the Amiga equivalent of fsck); in some cases of corruption a file or
directory might not be accessible and an error would be returned (the
validation process would normally fix that error when it got to it).  If
something tried to write to an unvalidated drive, the filesystem would
return an error, and the Write()/Create()/Delete()/etc. OS code would put
up an error/retry requester, which would automatically go away (and retry)
once the drive had been validated.  Validation was also quite fast by fsck
standards.  Not every problem could be fixed by the built-in validator,
but separate disk-recovery tools could attempt to repair even very
seriously hosed disks.  Since the disk was usually mostly readable even
with an uncorrectable error, the recovery program could often be run from
the bad partition itself if need be.
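
	As a rough illustration of that "error until validated" behaviour,
here is a small C sketch.  The names (vol_write, VOL_VALIDATING, and so
on) are invented for the example; the real code returned a DOS error that
the Write()/Create()/Delete() callers turned into the error/retry
requester:

#include <errno.h>
#include <stdio.h>

enum vol_state { VOL_VALIDATING, VOL_VALID };

struct volume {
	enum vol_state state;
};

/* Hypothetical write entry point: while the background validator is
 * still running, writes are refused rather than blocked. */
static int vol_write(struct volume *v, const char *buf)
{
	if (v->state != VOL_VALID)
		return -EBUSY;              /* "disk not validated" style error */
	printf("wrote: %s\n", buf);         /* stand-in for the real write */
	return 0;
}

int main(void)
{
	struct volume v = { .state = VOL_VALIDATING };

	/* The OS-level caller retries until validation finishes; on the
	 * Amiga that retry was driven by the error/retry requester. */
	while (vol_write(&v, "hello") == -EBUSY) {
		printf("volume not validated yet, retrying...\n");
		v.state = VOL_VALID;        /* pretend the validator just finished */
	}
	return 0;
}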

	Of course, this is mostly of historical interest at this point, but
some of the ideas used in it show up moderately often (witness the message
quoted above).

-- 
Randell Jesup, Worldgate Communications, ex-Scala, ex-Amiga OS team ('88-94)
rjesup@wgate.com


