From owner-freebsd-hackers Tue Nov 16 10:19:27 1999
Delivered-To: freebsd-hackers@freebsd.org
Received: from bingnet2.cc.binghamton.edu (bingnet2.cc.binghamton.edu [128.226.1.18])
	by hub.freebsd.org (Postfix) with ESMTP id 6A3981533F;
	Tue, 16 Nov 1999 10:19:11 -0800 (PST)
	(envelope-from zzhang@cs.binghamton.edu)
Received: from sol.cs.binghamton.edu (cs1-gw.cs.binghamton.edu [128.226.171.72])
	by bingnet2.cc.binghamton.edu (8.9.3/8.9.3) with SMTP id NAA00441;
	Tue, 16 Nov 1999 13:19:04 -0500 (EST)
Date: Tue, 16 Nov 1999 12:06:37 -0500 (EST)
From: Zhihui Zhang
Reply-To: Zhihui Zhang
To: freebsd-hackers@freebsd.org, freebsd-fs@freebsd.org
Subject: On-the-fly defragmentation of FFS
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-hackers@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

After studying the code of ffs_reallocblks() for a while, it occurs to me
that the on-the-fly defragmentation of an FFS file (it is done on a
per-file basis) only takes place at the end of a file, and only when the
previous logical blocks have all been laid out contiguously on the disk
(see also cluster_write()). These seem like significant limitations on the
FFS defragmenter. If a file was not allocated contiguously when it was
first created, how can the defragmenter find contiguous space for it
later, unless many of the files in between are deleted?

I hope someone can confirm or correct my understanding. It would be even
better if someone could suggest a way to improve defragmentation, if the
FFS defragmenter is not very efficient. BTW, if I copy all the files from
one filesystem to a new filesystem, will the files be stored more
contiguously? Why?

Any help or suggestion is appreciated.

-Zhihui

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message