Date: Wed, 25 Apr 2012 22:51:08 +0000 (UTC)
From: Ricardo Nabinger Sanchez <rnsanchez@wait4.org>
To: freebsd-threads@freebsd.org
Subject: Re: About the memory barrier in BSD libc
Message-ID: <jn9v4s$hij$1@dough.gmane.org>
References: <CAPHpMu=DOGQ=TuFeYH7bH8hVwteT4Q3k67-mvoOFob6P3Y506w@mail.gmail.com> <20120423084120.GD76983@zxy.spb.ru>
On Mon, 23 Apr 2012 12:41:20 +0400, Slawa Olhovchenkov wrote:

> /usr/include/machine/atomic.h:
>
> #define mb()    __asm __volatile("lock; addl $0,(%%esp)" : : : "memory")
> #define wmb()   __asm __volatile("lock; addl $0,(%%esp)" : : : "memory")
> #define rmb()   __asm __volatile("lock; addl $0,(%%esp)" : : : "memory")

Somewhat late on this topic, but I'd like to understand why a locked write to (%esp) is issued here, which would invalidate that cache line on other cores and thus force a miss on them. Instead, why not issue "mfence" (mb), "sfence" (wmb), and "lfence" (rmb)?

Cheers

-- 
Ricardo Nabinger Sanchez       http://rnsanchez.wait4.org/
"Left to themselves, things tend to go from bad to worse."
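For reference, a minimal sketch of the fence-based definitions being asked about, written in the same inline-asm style as the quoted machine/atomic.h and assuming an SSE2-capable x86 CPU (sfence needs SSE, lfence/mfence need SSE2). This is only an illustration of the proposed alternative, not what FreeBSD actually ships:

    /* Hypothetical fence-based barriers; not FreeBSD's definitions. */
    #define mb()    __asm __volatile("mfence" : : : "memory")  /* orders all loads and stores */
    #define wmb()   __asm __volatile("sfence" : : : "memory")  /* orders stores only */
    #define rmb()   __asm __volatile("lfence" : : : "memory")  /* orders loads only */

The "memory" clobber is kept so the compiler does not reorder memory accesses across the barrier, just as in the lock-prefixed version quoted above.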