Date: Fri, 14 Sep 2012 00:47:52 -0500 (CDT)
From: Bryan Venteicher
To: John Baldwin
Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org,
    src-committers@freebsd.org, Peter Grehan
Subject: Re: svn commit: r240427 - head/sys/dev/virtio
Message-ID: <2045684227.3044.1347601672495.JavaMail.root@daemoninthecloset.org>
In-Reply-To: <201209131456.03422.jhb@freebsd.org>

Hi

----- Original Message -----
> From: "John Baldwin"
> To: "Bryan Venteicher"
> Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org,
>     src-committers@freebsd.org, "Peter Grehan"
> Sent: Thursday, September 13, 2012 1:56:03 PM
> Subject: Re: svn commit: r240427 - head/sys/dev/virtio
>
> On Thursday, September 13, 2012 12:40:42 pm Bryan Venteicher wrote:
> > > Would it be possible to use atomic_load/store() instead of direct
> > > memory barriers?  For example:
> > >
> >
> > I've been sitting on a (lightly tested) patch [1] for a while that
> > does just that, but am not very happy with it.  A lot of the fields
> > are 16-bit, which not all architectures have atomic(9) support for.
> > And I think the atomic(9) behavior on UP kernels does not provide
> > the same guarantees as on an SMP kernel (we could have a UP kernel
> > on an SMP host).
>
> That is the one thing I was worried about (the fields being defined
> to be 16-bit).  I presume that is required by the virtio de facto
> standard?  Shame we can't clue-by-four people putting 16-bit fields
> in this sort of thing. :-P
>

Yes, the 16-bit fields are mandated by the VirtIO spec.  The guest/host
shared memory is rounded up to the next full page, so there actually
isn't any memory savings for typical queue sizes.  I doubt it is any
worse than actual hardware regardless.

> > I also found myself wanting an atomic_load_rel_*() type function.
>
> That would be odd, I think.  _rel barriers only affect stores, so
> there would be no defined ordering between the load and the
> subsequent stores.  (With our current definitions of _acq and
> _rel.)  If you need a full fence for some reason, then a plain
> mb() may be the best thing in that case.
>

I'm able to batch-add descriptors (via vq_ring_update_avail()), but
when checking whether I must notify the host, I need to make sure the
latest avail->idx is visible before checking the flag from the host
that says whether notifications are disabled.
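Concretely, the notify check I have in mind looks something like the
sketch below.  This is only an illustration of the ordering constraint,
not the code committed in r240427; the helper name and field names just
approximate what virtqueue.c uses.

	/*
	 * Sketch only.  vq_ring_update_avail() already stored the new
	 * avail->idx; that store must be visible to the host before we
	 * load used->flags, otherwise we could read a stale
	 * VRING_USED_F_NO_NOTIFY and skip a notification the host
	 * still needs.  _acq/_rel do not order a store against a later
	 * load, so a full barrier is used here.
	 */
	static int
	vq_ring_must_notify_host(struct virtqueue *vq)
	{
		uint16_t flags;

		mb();
		flags = vq->vq_ring.used->flags;

		return ((flags & VRING_USED_F_NO_NOTIFY) == 0);
	}
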
Gratuitous notifications are fine, but skipping one is not.  In the
patch, I kludge this with:

	atomic_add_rel_16(&flags, 0);
	foo = flags;

hoping the dependency would prevent the assignment to foo from floating
above the atomic_add_rel_16().  I originally did the atomic(9) work
just to see if there would be any performance difference between the
two; I wasn't able to measure any, but I don't have the most modern
hardware either.

Bryan

> --
> John Baldwin