Date: Thu, 14 Jun 2012 15:24:56 +0300
From: Konstantin Belousov <kostikbel@gmail.com>
To: Rick Macklem <rmacklem@uoguelph.ca>
Cc: freebsd-fs@freebsd.org, Pavlo <devgs@ukr.net>
Subject: Re: mmap() incoherency on hi I/O load (FS is zfs)
Message-ID: <20120614122456.GZ2337@deviant.kiev.zoral.com.ua>
In-Reply-To: <893489718.1762311.1339673556220.JavaMail.root@erie.cs.uoguelph.ca>
References: <91943.1339669820.1305529125424791552@ffe15.ukr.net> <893489718.1762311.1339673556220.JavaMail.root@erie.cs.uoguelph.ca>
On Thu, Jun 14, 2012 at 07:32:36AM -0400, Rick Macklem wrote:
> Pavlo wrote:
> > There's a case where some parts of files that are mapped and then
> > modified get corrupted. By corrupted I mean some of the data is fine
> > (the data that was written using write()/pwrite()) but some looks
> > like it never existed. It sat in the buffers for some time, while
> > several processes simultaneously (with access synchronised, of
> > course) used the shared pages and saw it there. But after a while
> > those processes found that it is now lost. Only the part of the data
> > written with pwrite() was there; everything that was written via
> > mmap() is zero.
> >
> > So, as I said, it occurs under high I/O load, when 4+ background
> > processes index a huge amount of data. I also want to note that it
> > never occurred in the life of our project while we used mmap() under
> > the same I/O stress conditions, as long as the mapping covered a
> > whole file or just a part (a header) starting from the beginning of
> > the file. The first time we used mappings of individual pages, just
> > to save RAM, this popped up.
> >
> > The workaround for this problem is msync() before any munmap(). But
> > the man page says:
> >
> >   The msync() system call is usually not needed since BSD implements
> >   a coherent file system buffer cache. However, it may be used to
> >   associate dirty VM pages with file system buffers and thus cause
> >   them to be flushed to physical media sooner rather than later.
> >
> > Any thoughts? Thanks.
> >
> With a recent kernel from head, I am seeing dirty mmap'd pages being
> written quite late for the NFSv4 client. Even after the NFS client
> VOP_RECLAIM() has been called, it seems. I didn't observe this
> behaviour in a kernel from head in March.
> (I don't know enough about the vm/mmap area to know if this is
> correct behaviour or not?)
>
> I thought I'd mention this, since you didn't say how recent a kernel
> you were running, and thought it might be caused by the same change?

Can you please comment more on this? How is this possible at all?
Could you please show at least a backtrace for the moment when a write
request is made for a page which belongs to an already reclaimed vnode?