Date:      Tue, 14 Jan 1997 22:04:36 +1100 (EDT)
From:      Darren Reed <avalon@coombs.anu.edu.au>
To:        lada@ws2301.gud.siemens.co.at (Hr.Ladavac)
Cc:        avalon@coombs.anu.edu.au, terry@lambert.org, stesin@gu.net, karpen@ocean.campus.luth.se, hackers@FreeBSD.org
Subject:   Re: truss, trace ??
Message-ID:  <199701141108.DAA06408@freefall.freebsd.org>
In-Reply-To: <199701141034.AA263998091@ws2301.gud.siemens.co.at> from "Hr.Ladavac" at Jan 14, 97 11:34:50 am

In some mail from Hr.Ladavac, sie said:
> 
> E-mail message from Darren Reed contained:
> > 
> > The way I see it, there are some things to consider which you may (or may
> > not) want to `work' with cyclic files:
> > 
> > * offset - when you pass byte n of an n byte cyclic file, should lseek tell
> >            you that you're at byte n+1 or 0 ?
> > 
> >            Does it make sense to return n+1 if it can't lseek to that
> >            absolute position ?  Would lseek() be hacked to goto position
> >            x as x % n ?
> > 
> > 
> > * blocks - why do you need to shuffle blocks around ?  Why not just wrap
> >            the offset pointer once you get to the end ?  (In effect, the
> >            write is done in 2 parts: first to the end of the file, the
> >            second from the start).

Hmmm, I can see that rotating the block list would be necessary if there
is open-write-close behaviour.

> > * readers - if a reader is open and at position y and the next write will
> >            go from x to x+n where x+n > y does the writer block ? (Consider
> >            that all data from y around to x is valid).
> 
> There is a rather simple way to satisfy most of these semantic requirements:
> replace the leading blocks with holes--the file grows in length, lseek
> works as expected, but write is only guaranteed to succeed if it
> falls in the last part of the file, and the filesystem occupancy does not
> increase.  read always succeeds, but sometimes it returns a buffer full of
> (leading) zeros.

But then stat(2) lies about the real file size - I'd call that a bug.

I'd call the behaviour of read in this instance buggy too: a read on a
cyclic buffer should never return 0's in the buffer (unless they were
written as 0's initially).

Whilst these requirements seemingly solve the problems, they do not lead
to a very good implementation of cyclic files.

> > I guess you're thinking of what happens when you keep appending to a file
> > ... (open - write - close).  I don't see that non-block sized record
> > files can exist as cyclic files properly under Unix, eg:
> > 
> > I have a 30,000 byte cyclic file.  I write 1 byte to it, making 30,001.
> > This isn't enough to delete the first block, but you must append it.
> > (hmmm, would this mean the first block would be a fragment - would it even
> > work ?)
> 
> Don't see a problem here; just blast away the leading blocks--unallocate them.
> The last block can easily be a fragment.

BUT, adding one byte means only the 1st byte should be deleted.  That isn't
an entire block.  So you want the first block to be 511 bytes and the last
to be 1 byte long.  The last block isn't a problem, but what about the
first ?

(By adding 1 byte, I mean open(O_APPEND), write(1 byte), close())

Hmmm, will this sort of thing lead to an increasingly large number of
fragments ?

Darren


