From owner-freebsd-threads@freebsd.org Fri Feb 12 23:24:44 2016
From: Eric van Gyzen <vangyzen@FreeBSD.org>
To: Konstantin Belousov, threads@freebsd.org, arch@freebsd.org
Subject: Re: libthr shared locks
Date: Fri, 12 Feb 2016 17:24:40 -0600
Message-ID: <56BE69B8.9020808@FreeBSD.org>
In-Reply-To: <20151223172528.GT3625@kib.kiev.ua>
References: <20151223172528.GT3625@kib.kiev.ua>
List-Id: Threading on FreeBSD
On 12/23/2015 11:25, Konstantin Belousov wrote:
> The implementation in the patch
> https://www.kib.kiev.ua/kib/pshared/pshared.1.patch
> gives shared mutexes, condvars, rwlocks and barriers.

I reviewed everything except kern_umtx.c, which I plan to review on
Monday.  Here are my comments so far.

* thr_mutex.c

Thank you for converting some macros to functions.  I find functions
much cleaner and easier to read and debug.

* thr_mutex.c line 116

The parentheses around (m) can be removed now.

* thr_mutex.c lines 331-333

	m->m_qe.tqe_prev = TAILQ_NEXT(last_priv, m_qe)->
	    m_qe.tqe_prev;

This seems to read the m_qe.tqe_prev field from a shared mutex.  Is
that safe?  It seems like a race.  The following would seem more
direct, avoiding the shared mutex:

	m->m_qe.tqe_prev = &TAILQ_NEXT(last_priv, m_qe);

* thr_mutex.c line 354

	*(q)->tqh_last = last_priv;

This seems to modify the tqe_next field in a shared mutex.  Is that
safe?  Furthermore, that mutex was/is the last on the list, but we
seem to set its tqe_next pointer to an earlier element, creating a
cycle in the list.

* thr_mutex.c line 451

__pthread_mutex_trylock() calls __thr_pshared_offpage() twice [for
pshared mutexes].  You could eliminate one call by moving
mutex_trylock_common() inline.

* thr_pshared.c line 165

res = NULL seems unnecessary.

* thr_pshared.c

In __thr_pshared_offpage(), can pshared_lookup() fail in the !doalloc
case?  pshared_hash seems to be an authority, not just an
optimization.  I ask so that I can understand the code and more
effectively review it.  In particular, if pshared_lookup() fails and
UMTX_SHM_LOOKUP succeeds, is it possible to get multiple mappings for
the same shm object?

* thr_barrier.c line 110

	if (bar == NULL)
		return (EFAULT);

POSIX does not mention EFAULT.  Should we return ENOMEM, or can we
"extend" the standard?  (Ditto for all other _init functions.)
* thr_cond.c line 106

You could use cattr instead of the ?: expression.

* thr_rwlock.c

rwlock_init() assumes that __thr_pshared_offpage() does not fail.