Date: Thu, 8 Sep 2005 11:36:43 +0300 (EEST)
From: Dmitry Pryanishnikov <dmitry@atlantis.dp.ua>
To: Mike Silbersack <silby@silby.com>
Cc: cvs-src@FreeBSD.org, src-committers@FreeBSD.org, cvs-all@FreeBSD.org
Subject: Re: cvs commit: src/sys/fs/msdosfs msdosfs_denode.c
Message-ID: <20050908112746.K43691@atlantis.atlantis.dp.ua>
In-Reply-To: <20050908024022.G28140@odysseus.silby.com>
References: <20050908094705.R19771@atlantis.atlantis.dp.ua> <20050908024022.G28140@odysseus.silby.com>
On Thu, 8 Sep 2005, Mike Silbersack wrote:

>> entries begin at byte offsets from the start of the media with identical
>> low-order 32 bits; e.g., 64-bit offsets
>>
>> 0x0000000000001000 and
>> 0x0000000100001000
>
> Hm, maybe it wouldn't be too difficult to create, then. There is an option
> to have compressed filesystems, so if one wrote a huge filesystem with files
> that all contained zeros, perhaps it would compress well enough.

BTW, how can one work with a compressed filesystem?

> If you just started creating a lot of equally sized files containing zero as
> their content, maybe it could be done via a script. Yeah, you could just
> call truncate in some sort of shell script loop until you have enough files,
> then go back and try reading file "000001", and that should cause the panic,
> right?

Our task is slightly different: it is not the files themselves that must
start at the magic offsets, but their _directory entries_. I think this is
achievable by creating a fresh FAT32 filesystem and then, in strict order, a
directory, a large (approx. 4 GB) file inside it, a second directory, and a
file inside that one, and finally looking up the first file. To get the
panic we just have to tune the size of the large file. If I have enough
time I'll try to prepare such a regression test.

Sincerely, Dmitry

--
Atlantis ISP, System Administrator
e-mail:    dmitry@atlantis.dp.ua
nic-hdl:   LYNX-RIPE
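[Editor's note: a minimal sketch of the collision condition under discussion.
The two 64-bit offsets are the hypothetical examples quoted above, not values
taken from a real volume; the comments sketching the regression-test recipe
are an assumption based on the steps Dmitry describes, not a tested script.]

```shell
#!/bin/sh
# Two directory entries collide in a hash keyed on the entry's byte offset
# if that offset is truncated to 32 bits: offsets that differ by a multiple
# of 2^32 produce the same key. Demonstrate with the offsets quoted above.

off1=0x0000000000001000
off2=0x0000000100001000

low1=$((off1 & 0xFFFFFFFF))
low2=$((off2 & 0xFFFFFFFF))

if [ "$low1" -eq "$low2" ]; then
    printf 'collision: both offsets truncate to 0x%08x\n' "$low1"
fi

# The regression test sketched in the mail would (on FreeBSD, as root):
#   1. create a FAT32 image and mount it via an md(4) device
#      (truncate -s 8g img; mdconfig -f img; newfs_msdos -F 32 /dev/mdN; mount_msdosfs ...)
#   2. mkdir dir1; write a ~4 GB file into dir1 so that dir2's directory
#      entries land 2^32 bytes after dir1's on the media
#   3. mkdir dir2; create a file in dir2
#   4. look up the first file again and check for the panic,
#      tuning the large file's size until the offsets collide.
```

The printed line confirms that, with 32-bit truncation, both entries would
land in the same hash chain even though their on-media offsets differ by 4 GB.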
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?20050908112746.K43691>