Date: Wed, 21 Jun 2006 23:00:52 +0100 (BST)
From: Robert Watson <rwatson@FreeBSD.org>
To: John Baldwin <jhb@freebsd.org>
Cc: Paul Allen <nospam@ugcs.caltech.edu>, freebsd-current@freebsd.org, current@freebsd.org
Subject: Re: FILEDESC_LOCK() implementation
Message-ID: <20060621225953.U8526@fledge.watson.org>
In-Reply-To: <200606211745.42525.jhb@freebsd.org>
References: <20060612054115.GA42379@xor.obsecurity.org> <20060621201927.GJ28128@groat.ugcs.caltech.edu> <20060621214346.G8526@fledge.watson.org> <200606211745.42525.jhb@freebsd.org>
On Wed, 21 Jun 2006, John Baldwin wrote:

>> The problem is this: when you have threads in the same process, file
>> descriptor lookup is performed against a common file descriptor array.
>> That array is protected by a lock, the filedesc lock. When lots of
>> threads simultaneously perform file descriptor operations, they contend
>> on the file descriptor array lock. So if you have 30 threads all doing
>> I/O, they are constantly looking up file descriptors and bumping into
>> each other. This is particularly noticeable for network workloads, where
>> many operations are very fast, and so they occur in significant
>> quantity. The M:N threading library actually handles this quite well by
>> bounding the number of threads trying to acquire the lock to the number
>> of processors, but with libthr you get pretty bad performance. This
>> contention problem also affects MySQL, etc.
>>
>> You can imagine a number of ways to work on this, but it's a tricky
>> problem that has to be looked at carefully.
>
> Are the lookup operations using a shared lock so that only things like
> open and close would actually contend?

I'm not sure anyone has tried that. The semantics of the filedesc lock seem
a bit complicated; I don't remember why that is right now.

Robert N M Watson
Computer Laboratory
University of Cambridge
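
[To make the shared-lock idea raised above concrete, here is a minimal
userspace sketch: descriptor lookups take a read (shared) lock, so they no
longer contend with one another, while install/close take a write
(exclusive) lock and still serialize against everything. pthread_rwlock_t
stands in for an in-kernel reader/writer lock such as sx(9); the structure
and function names are hypothetical, not from the FreeBSD sources. Note
that the real filedesc code must also take a reference on the file (fhold)
before dropping the lock, which is part of what makes the semantics
complicated.]

    /*
     * Hypothetical sketch only: shared lock for fd lookup, exclusive
     * lock for fd install/remove.  Not FreeBSD kernel code.
     */
    #include <pthread.h>
    #include <stddef.h>

    #define TABLE_SIZE 1024

    struct file;                       /* opaque per-open-file object */

    struct fd_table {
            pthread_rwlock_t lock;     /* stand-in for the filedesc lock */
            struct file *files[TABLE_SIZE];
    };

    void
    fd_table_init(struct fd_table *t)
    {
            int i;

            pthread_rwlock_init(&t->lock, NULL);
            for (i = 0; i < TABLE_SIZE; i++)
                    t->files[i] = NULL;
    }

    /*
     * Lookup: many threads may hold the shared lock at once, so
     * concurrent lookups no longer contend.  A real implementation
     * would have to take a reference on the file before unlocking.
     */
    struct file *
    fd_lookup(struct fd_table *t, int fd)
    {
            struct file *fp = NULL;

            pthread_rwlock_rdlock(&t->lock);
            if (fd >= 0 && fd < TABLE_SIZE)
                    fp = t->files[fd];
            pthread_rwlock_unlock(&t->lock);
            return (fp);
    }

    /* Install/remove: exclusive, serializes against all lookups. */
    int
    fd_install(struct fd_table *t, int fd, struct file *fp)
    {
            if (fd < 0 || fd >= TABLE_SIZE)
                    return (-1);
            pthread_rwlock_wrlock(&t->lock);
            t->files[fd] = fp;
            pthread_rwlock_unlock(&t->lock);
            return (0);
    }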