Date: Mon, 17 Jun 2002 18:04:33 +0900
From: Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
To: Jeffrey Hsu <hsu@FreeBSD.org>
Cc: Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>, jhb@FreeBSD.org, smp@FreeBSD.org, yangjihui@yahoo.com
Subject: Re: Sharing a single mutex between a socket and its PCB
Message-ID: <200206170904.g5H94X3i076651@rina.r.dl.itc.u-tokyo.ac.jp>
In-Reply-To: <0GXS004W453AEO@mta5.snfc21.pbi.net>
References: <tanimura@r.dl.itc.u-tokyo.ac.jp> <200206151545.g5FFipAY006726@silver.carrots.uucp.r.dl.itc.u-tokyo.ac.jp> <0GXS004W453AEO@mta5.snfc21.pbi.net>
On Sat, 15 Jun 2002 20:45:19 -0700,
Jeffrey Hsu <hsu@FreeBSD.org> said:
>> Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp> writes:
>> As some socket operations (e.g. sosend(), soreceive(), ...) modify
>> both a socket and its PCB at once, both of them should be locked by a
>> single mutex. Since hsu has already locked down struct inpcb, I would
>> like to protect a socket with the mutex of its PCB.
>> In order for the socket subsystem to lock and unlock opaquely, two new
>> usrreq methods will be added:
>> - pru_lock() locks the PCB of a socket.
>> - pru_unlock() unlocks the PCB of a socket.
>> If the PCB has its own mutex, those methods simply lock and unlock
>> that mutex. Otherwise, they lock and unlock Giant. This is so that
>> Giant can later be pushed down out of the socket subsystem.
>> Comments?
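(To make the quoted proposal concrete, here is a rough sketch of how
TCP could implement the two methods. The names tcp_usr_lock() and
tcp_usr_unlock() are only illustrative, and the Giant branch stands in
for a socket whose PCB has no mutex of its own:)

static void
tcp_usr_lock(struct socket *so)
{
	struct inpcb *inp = sotoinpcb(so);

	if (inp != NULL)
		INP_LOCK(inp);		/* the PCB has its own mutex */
	else
		mtx_lock(&Giant);	/* no PCB mutex; fall back to Giant */
}

static void
tcp_usr_unlock(struct socket *so)
{
	struct inpcb *inp = sotoinpcb(so);

	if (inp != NULL)
		INP_UNLOCK(inp);
	else
		mtx_unlock(&Giant);
}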
hsu> Let's stick with the BSD/OS design, which is to have a separate socket buffer
hsu> lock. It's better for concurrency this way. (BSD/OS also shows there's no
hsu> need to have a separate socket lock. The socket buffer lock doubles as a
hsu> socket lock.)
Exporting the PCB's mutex helps with the lock order issue between the
socket lock and the PCB lock: you never have to release the socket
lock just to acquire the PCB lock, the way BSD/OS does:
tcp_usrreq(so)
{
	sb = so->so_buf;
	SOCKBUF_UNLOCK(sb);	/* drop the socket buffer lock... */
	/* XXX What if someone modified the socket? */
	INP_LOCK(inp);		/* ...so that the PCB lock is taken first */
	SOCKBUF_LOCK(sb);	/* then reacquire the socket buffer lock */
	tcp_do_usrreq();
	INP_UNLOCK(inp);
}
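With the PCB mutex exported through pru_lock()/pru_unlock(), the same
operation collapses to a single lock/unlock pair. A sketch in the same
style as above (tcp_do_usrreq() is still just a placeholder):

tcp_usrreq(so)
{
	inp = sotoinpcb(so);
	INP_LOCK(inp);		/* one mutex covers the socket and its PCB */
	tcp_do_usrreq();
	INP_UNLOCK(inp);
}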
Speaking of concurrency, running only one net swi thread does not
scale on an MP machine. Maybe we can make a pool of net swi threads
and run them in parallel.
--
Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp> <tanimura@FreeBSD.org>
