Date:      Mon, 30 May 2011 15:34:37 -0400 (EDT)
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Robert Watson <rwatson@FreeBSD.org>
Cc:        Kostik Belousov <kostikbel@gmail.com>, Rick Macklem <rmacklem@freebsd.org>, svn-src-all@freebsd.org, src-committers@freebsd.org, svn-src-head@freebsd.org
Subject:   Re: svn commit: r222466 - head/sbin/umount
Message-ID:  <1822018328.1016204.1306784077970.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <alpine.BSF.2.00.1105301923140.1535@fledge.watson.org>

> 
> If it masks, for example, latency for a synchronous RPC to the remote
> mountd to deregister the mountpoint, allowing a cache flush and unmount
> to take place concurrently, that might be a useful benefit. I'm not sure
> I see any evidence that is the case in the source code, however.
> 
Well, I suppose write latency will often be higher for NFS than for other
mount points.
The case where I think the sync(2) might help from a performance point of
view is this (I'm talking through my hat here; I haven't benchmarked or looked
at it closely and don't intend to:-):
- someone does a "umount -a" when there are a lot of /etc/fstab entries with a
  lot of dirty blocks to be written out.
  --> the writes would be started for all of the file systems and then they
      would be unmount(2)'d one at a time. It might be quite a while before the
      last fs gets its unmount(2), so its blocks "might" have been written out
      by then. (A rough sketch of this pattern is below.)

On the other hand, take the above example and replace "umount -a" with
"umount /x", where "/x" is a single fs with no dirty blocks, and...
--> for this one, the sync(2) would just slow the umount down, by going
    through all the other fs's...
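
To make the "umount -a" case concrete, here's a minimal sketch of the
sync-then-loop pattern I have in mind (this is not the actual sbin/umount
code; the umount_all() helper and its argument list are made up for
illustration):

    #include <sys/param.h>
    #include <sys/mount.h>
    #include <err.h>
    #include <unistd.h>

    /*
     * Sketch only: start writeback for every mounted file system once,
     * then unmount them one at a time.  By the time the loop reaches the
     * later entries, their dirty blocks may already have been written,
     * so each unmount(2) has less to flush synchronously.
     */
    static void
    umount_all(const char **mntpts, int count)
    {
            int i;

            sync();         /* schedule writes for all mounted fs's */
            for (i = 0; i < count; i++) {
                    if (unmount(mntpts[i], 0) == -1)
                            warn("unmount %s", mntpts[i]);
            }
    }

For the single clean "/x" case, that sync() call is pure overhead, which is
the slowdown I mean above.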

rick


