Date: Sun, 2 Apr 1995 08:35:46 +1000
From: Bruce Evans <bde@zeta.org.au>
To: davidg@Root.COM, terry@cs.weber.edu
Cc: bde@zeta.org.au, bugs@ns1.win.net, gary@palmer.demon.co.uk, hackers@FreeBSD.org, jkh@freefall.cdrom.com, tom@haven.uniserve.com
Subject: Re: 4 gig st15150n disk setups
Message-ID: <199504012235.IAA21888@godzilla.zeta.org.au>
>> Just to clarify what Bruce is saying: If someone were to create a file that
>> was >2GB, BAD things would happen. The system currently considers any blocks
>> >2GB and <4GB as file metadata (for containing indirect blocks). Not only
>> would this certainly cause the machine to panic, it would almost certainly
>> cause random filesystem corruption.

>> I'll try to fix as many of these potential problems as possible before the
>> release.

>I was under the impression that these were atomic block offsets -- NOT
>byte offsets.

David might have it slightly wrong above.  I'm not familiar with the code
that handles negative (block?) numbers in metadata.

The clustering code converts block numbers to byte offsets for some reason,
perhaps just because it wants to compare the byte offset with the file size,
and multiplying the block number by the block size is sometimes much more
efficient than dividing the file size by the block size and worrying about
rounding.  The multiplications are done as `blkno * size' where `blkno' is
usually of type daddr_t and `size' is usually of type long.  They should be
done as `(off_t)blkno * size'.  This is probably easy to fix - there don't
seem to be many secondary problems.

I gave up on the problem for a while because it seemed that there were more
fundamental problems in the vm system.

Bruce
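A minimal, hypothetical C sketch of the truncation Bruce describes, not the
actual clustering code: it assumes a 32-bit daddr_t and long (as on i386 at
the time) and an 8K filesystem block size, and the variable names simply
follow the message.  The 32-bit signed overflow is strictly undefined
behaviour in C, but on i386 it just wrapped, which is what made byte offsets
past 2GB come out negative.

	/*
	 * Sketch only, under the assumptions stated above.
	 */
	#include <stdint.h>
	#include <inttypes.h>
	#include <stdio.h>

	int
	main(void)
	{
		int32_t blkno = 300000;	/* logical block ~2.3GB into the file */
		int32_t size  = 8192;	/* filesystem block size */

		/* Broken: `blkno * size' is evaluated in 32 bits and wraps negative. */
		int64_t bad  = blkno * size;

		/* Fixed: `(off_t)blkno * size' widens the product to 64 bits first. */
		int64_t good = (int64_t)blkno * size;

		printf("32-bit product: %" PRId64 "\n", bad);	/* -1837367296 */
		printf("64-bit product: %" PRId64 "\n", good);	/*  2457600000 */
		return 0;
	}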