From owner-freebsd-threads@freebsd.org Fri May 13 15:37:19 2016
Date: Fri, 13 May 2016 17:37:16 +0200
From: Jilles Tjoelker <jilles@stack.nl>
To: Konstantin Belousov
Cc: threads@freebsd.org, arch@freebsd.org
Subject: Re: Robust mutexes implementation
Message-ID: <20160513153716.GA30576@stack.nl>
References: <20160505131029.GE2422@kib.kiev.ua>
 <20160506233011.GA99994@stack.nl> <20160507165956.GC89104@kib.kiev.ua>
 <20160508125222.GA48862@stack.nl> <20160509025107.GN89104@kib.kiev.ua>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <20160509025107.GN89104@kib.kiev.ua>
List-Id: Threading on FreeBSD

On Mon, May 09, 2016 at 05:51:07AM +0300, Konstantin Belousov wrote:
> On Sun, May 08, 2016 at 02:52:22PM +0200, Jilles Tjoelker wrote:
> > OK. The patch still initializes umtx_shm_vnobj_persistent to 1 though.
> > There is also a leak if umtx_shm_vnobj_persistent is toggled from 1 to
> > 0 while an unmapped object with an off-page is active.
> [snip]
> > All this is POSIX-compliant since POSIX specifies that the state of
> > synchronization objects becomes undefined on last unmap, and our
> > implementation fundamentally depends on that possibility.

> Could you, please, point me to the exact place in the standard where
> this is allowed?

The mmap() page in POSIX.1-2008tc1 XSH 3 has:

] The state of synchronization objects such as mutexes, semaphores,
] barriers, and conditional variables placed in shared memory mapped
] with MAP_SHARED becomes undefined when the last region in any process
] containing the synchronization object is unmapped.

This is new in issue 7 (SUSv4):

] Austin Group Interpretations 1003.1-2001 #078 and #079 are applied,
] clarifying page alignment requirements and adding a note about the
] state of synchronization objects becoming undefined when a shared
] region is unmapped.

> > Linux and Solaris do not need the possibility. The automatic
> > re-initialization and umtx_vnode_persistent are just hacks that make
> > certain applications almost always work (but not always, and in such
> > cases it will be hard to debug).

> > Another issue with umtx_vnode_persistent is that it can hide high
> > memory usage.
> > Filling up a page with pthread_mutex_t will create many pages full of
> > actual mutexes. This memory usage is only visible as long as it is
> > still mapped somewhere.

> There is already a resource limit for the number of pshared locks per
> uid, RLIMIT_UMTXP. When exceeded, user would get somewhat obscure
> failure mode, but excessive memory consumption is not allowed. And I
> think that vmstat -o would give enough info to diagnose, except that
> users must know about it and be qualified enough to interpret the
> output.

Hmm, OK.

> > Apart from that, umtx_vnode_persistent can (at least conceptually)
> > work fully reliably for shared memory objects and tmpfs files, which
> > do not have persistent storage.

> I changed defaults for the umtx_vnode_persistent to 0 in the published
> patch.

OK.

> > Hmm, libthr2 or non-standard synchronization primitive implementations
> > seem a good reason to not check for umtx shm page.

> > However, the existing checks can be made stricter. The umtx_handle_rb()
> > from robust.3.patch will use m_rb_lnk with no validation at all except
> > that it is a valid pointer. However, if the UMUTEX_ROBUST flag is not
> > set, the mutex should not have been in this list at all and it is
> > probably safer to ignore m_rb_lnk.

> Ok, I changed the code to consider lack of UMUTEX_ROBUST as a stopper
> for the list walk. Also, I stop the walk if mutex is not owned by
> the current thread, except when the mutex was stored in inact slot.
> The same piece of changes hopefully fixes list walk for COMPAT32 on
> big-endian machines.

OK.

> > There is a difference between chunked allocations and the current
> > m_rb_lnk in that the list would reside in local memory, not vulnerable
> > to other processes scribbling over it. This is probably not a major
> > issue since sharing a mutex already allows threads to block each other
> > indefinitely.

> I would easily delegate the chunked array to some future
> reimplementation if not the ABI issue.
Still, I do not like it. An array only works well for this if you know
beforehand how long it needs to be, and I don't think we can do this
since Linux's limit is so high that an array would waste a lot of
memory.

The existence of some limit is, however, unavoidable, and it could be
considered a bug that pthread_mutex_lock() for a robust mutex returns
success even if it will not fulfill its promise to do the EOWNERDEAD
thing.

> Current updates to the patch
> https://kib.kiev.ua/kib/pshared/robust.4.patch

-- 
Jilles Tjoelker