Date:      Mon, 17 Nov 1997 15:08:54 -0500 (EST)
From:      "John S. Dyson" <toor@dyson.iquest.net>
To:        chuckr@glue.umd.edu
Cc:        npp@neg-micon.dk, current@FreeBSD.ORG
Subject:   Re: async fs?
Message-ID:  <199711172008.PAA04912@dyson.iquest.net>
In-Reply-To: <199711171723.MAA28442@earth.mat.net> from "chuckr@glue.umd.edu" at "Nov 17, 97 12:23:38 pm"

chuckr@glue.umd.edu said:
> On 17 Nov, Nicolai Petri wrote:
> > 
> > Greetings All,
> > 
> > I read something about mounting a drive in async mode, I wondered if this
> > will give me a performance increase on my proxy server.. And is it possible
> > to do it on all filesystems ??
> >
> 
> Are you aware that losing your system while you are running sync is
> usually fairly safe (often you lose no files, or if you were very busy
> doing disk activity, maybe a few), but if you are mounted async, you
> could possibly lose much, much more?  It'll certainly increase
> performance, but you'd better be willing to pay the price.
>  
With an async-mounted filesystem, you can lose the filesystem structure more
easily -- specifically, it is harder for fsck to correct the damage after a system
failure.  The way that I implemented write clustering, the async option does
not normally defer large sequential writes.  A write cluster of 64K will normally
be written immediately, just as in the normal mount case.  The overhead/cost of
having large numbers of deferred writes to files that are not temporary is (IMO)
worse than just doing them.  If the files are truly temporary, it is really best
to use MFS, and in the case of GCC, to use -pipe when you can.
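[As a sketch of what the above suggests -- the device names and mount points
below are illustrative examples, not taken from this message:]

```shell
# Mount a filesystem with the async option: metadata writes are
# deferred, so after a crash fsck may be unable to repair it.
mount -o async /dev/wd0s1e /usr/obj

# For truly temporary files, a memory filesystem (MFS) avoids the
# disk entirely; -s gives the size in 512-byte sectors.
mount_mfs -s 65536 /dev/wd0s1b /tmp

# Let GCC pass data between compilation stages through pipes
# rather than temporary files.
cc -pipe -O -c hello.c
```

Either way, nothing you cannot afford to lose should live on an
async-mounted or MFS filesystem.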

An async-mounted filesystem mostly decreases the number of random seeks
associated with writes of file data, directories, and other metadata.

-- 
John
dyson@freebsd.org
jdyson@nc.com


