Date: Wed, 2 Aug 2000 20:53:27 +0000 (GMT)
From: Terry Lambert <tlambert@primenet.com>
To: zzhang@cs.binghamton.edu (Zhihui Zhang)
Cc: tlambert@primenet.com (Terry Lambert), stevec@nbci.com (Steve Carlson), freebsd-fs@FreeBSD.ORG
Subject: Re: FFS performance for large directories?
Message-ID: <200008022053.NAA06373@usr06.primenet.com>
In-Reply-To: <Pine.SOL.4.21.0007312045080.2183-100000@sol.cs.binghamton.edu> from "Zhihui Zhang" at Jul 31, 2000 08:46:59 PM
> > This is because the tarball is packed up in the wrong order;
> > change the packing order (breadth-first vs. depth-first),
> > and the "ports problem" goes away.  I have done this with the
> > -T option to tar, and it works fine, so long as you have an
> > accurate file.  This ensures that there is no cache-busting
> > on the dearchive, which is the source of the problem.
>
> Good point.  But what do you mean by saying "have an accurate
> file"?

If you use a naked tar without -T, then it traverses the
directory depth-first.  This means that it accurately gets all of
the files that are there and packs them up.

If you drive it using -T, then it depends on the contents of the
list-of-files file that you tell it to use.  This means that you
need to make sure the list-of-files file is kept up to date, or
you will miss some files and/or patches and/or new ports.

It's like depending on MAKEDEV instead of devfs to keep the
devices up to date: if you have stale data, then you end up with
a bad result that's worse than waiting around for the
cache-busting creates.

					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my
present or previous employers.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-fs" in the body of the message
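As a minimal sketch of the idea above: find(1) has no breadth-first option, so an approximation is to sort the file list by path depth (number of slashes) before feeding it to tar with -T.  The directory name "ports" and file name "filelist" are illustrative, and --no-recursion is GNU tar syntax (without it, directories named in the list would be re-added recursively, duplicating entries).

```shell
# Build an approximately breadth-first file list: prefix each path
# with its depth (count of "/"), sort numerically, strip the prefix.
find ports -print | awk '{ print gsub("/","/"), $0 }' | \
    sort -n | cut -d' ' -f2- > filelist

# Pack the tree in exactly the order listed; --no-recursion keeps
# tar from descending into listed directories a second time.
tar --no-recursion -c -f ports.tar -T filelist
```

Extracting an archive packed this way creates all directories at one depth before descending, which is what avoids the cache-busting create pattern on dearchive.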