Date:      Tue, 27 Dec 2011 18:57:56 +0200
From:      Kostik Belousov <kostikbel@gmail.com>
To:        Attilio Rao <attilio@freebsd.org>
Cc:        freebsd-hackers@freebsd.org, Giovanni Trematerra <giovanni.trematerra@gmail.com>, Venkatesh Srinivas <vsrinivas@dragonflybsd.org>
Subject:   Re: Per-mount syncer threads and fanout for pagedaemon cleaning
Message-ID:  <20111227165755.GT50300@deviant.kiev.zoral.com.ua>
In-Reply-To: <CAJ-FndDY-40xqVBTS5wSyrw3cxbG=hTjQ=et-nBtkSnesxrgZQ@mail.gmail.com>
References:  <20111226202414.GA18713@centaur.acm.jhu.edu> <CACfq090S=U-_3QA1XLNX31SD2zgAcnmG9kJrXYCvhR9Q-2JfKA@mail.gmail.com> <CAJ-FndDY-40xqVBTS5wSyrw3cxbG=hTjQ=et-nBtkSnesxrgZQ@mail.gmail.com>


On Tue, Dec 27, 2011 at 05:05:04PM +0100, Attilio Rao wrote:
> 2011/12/27 Giovanni Trematerra <giovanni.trematerra@gmail.com>:
> > On Mon, Dec 26, 2011 at 9:24 PM, Venkatesh Srinivas
> > <vsrinivas@dragonflybsd.org> wrote:
> >> Hi!
> >>
> >> I've been playing with two things in DragonFly that might be of interest
> >> here.
> >>
> >> Thing #1 :=
> >>
> >> First, per-mountpoint syncer threads. Currently there is a single thread,
> >> 'syncer', which periodically calls fsync() on dirty vnodes from every mount,
> >> along with calling vfs_sync() on each filesystem itself (via syncer vnodes).
> >>
> >> My patch modifies this to create syncer threads for mounts that request it.
> >> For these mounts, vnodes are synced from their mount-specific thread rather
> >> than the global syncer.
> >>
> >> The idea is that periodic fsync/sync operations from one filesystem should not
> >> stall or delay synchronization for other ones.
> >> The patch was fairly simple:
> >> http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/50e4012a4b55e1efc595db0db397b4365f08b640
> >>
> >
> > There's something WIP by attilio@ in that area.
> > You might want to take a look at
> > http://people.freebsd.org/~attilio/syncer_alpha_15.diff
> >
> > I don't know what hammerfs needs, but UFS/FFS and the buffer cache do a good
> > job performance-wise, and so the authors are skeptical about the boost that
> > such a change can give. We believe that brain cycles need to be spent on
> > other pieces of the system such as ARC and ZFS.
>=20
> More specifically, it is likely that focusing on UFS and the buffer cache
> for performance is not really useful; we should direct our efforts at
> ARC and ZFS instead.
> Also, the real bottlenecks in our I/O paths are GEOM's
> single-threaded design, the lack of unmapped I/O functionality, and
> possibly the lack of prioritized I/O.
Even if it does not help performance (which is possible), the change itself
is useful because it gives better system behaviour in the case of failure.
E.g., a slowly-responding or wedged NFS server, a dying disk, etc. would have
a more limited impact with the patch than without it. It will not completely
solve the issue, since e.g. the amount of dirty buffers is only limited
globally, not per mount point. But at least it covers a significant part of
the problem.
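
For reference, the shape of the change is roughly the following (a sketch
only, not the actual DragonFly code; the function names and the opt-in hook
are made up, and locking, vfs_busy() and unmount teardown are omitted):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/kthread.h>
    #include <sys/mount.h>

    static void
    per_mount_syncer(void *arg)
    {
            struct mount *mp = arg;

            for (;;) {
                    /* Push dirty pages of this mount's vnodes to the fs. */
                    vfs_msync(mp, MNT_NOWAIT);
                    /* Let the filesystem write back its own state. */
                    VFS_SYNC(mp, MNT_LAZY);
                    /* Wake up once per the usual 30 second sync period. */
                    pause("pmsync", 30 * hz);
            }
    }

    /* Called at mount time for filesystems that opt in. */
    static int
    start_per_mount_syncer(struct mount *mp)
    {

            return (kthread_add(per_mount_syncer, mp, NULL, NULL, 0, 0,
                "syncer:%s", mp->mnt_stat.f_mntonname));
    }

With something like this, a wedged NFS server blocks only its own syncer
thread inside VFS_SYNC(), instead of blocking the global syncer for every
mount.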

Also, it should help with interactivity and the load spikes at the 30-second
syncer interval.
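
To put made-up numbers on it, purely for illustration: if one wedged mount's
VFS_SYNC() takes, say, 10 seconds, the single global syncer spends a third of
every 30-second period stuck inside that one filesystem, everybody else's
dirty vnodes queue up behind it, and they then get flushed in a burst once it
unblocks. With a per-mount thread, only that mount's schedule slips.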

I remember that I had no major objections when I read the patch. I personally
would prefer to have it committed.
