Date: Wed, 2 Apr 2003 18:10:14 -0800 (PST)
From: Julian Elischer <julian@elischer.org>
To: Matthew Dillon <dillon@apollo.backplane.com>
Cc: current@freebsd.org
Subject: Re: libthr and 1:1 threading.
Message-ID: <Pine.BSF.4.21.0304021803120.16840-100000@InterJet.elischer.org>
In-Reply-To: <200304030157.h331veVm087635@apollo.backplane.com>
A thought on 'fixing AIO..'

On Wed, 2 Apr 2003, Matthew Dillon wrote:

>     A better solution would be to implement a new system call, similar to
>     pread(), which simply checks the buffer cache and returns a short read
>     or an error if the data is not present.  If the call fails you would
>     then know that reading that data would block in the disk subsystem, and
>     you could back off to a more expensive mechanism like AIO.  If you want
>     to select() on it you would then simply use kqueue with EVFILT_AIO and
>     AIO.  A system call pread_cache(), or perhaps we could even use
>     recvmsg() with a flag.  Such an interface would not have to touch the
>     filesystem code, only the buffer cache and the VM page cache, and
>     could be implemented in less than a day.

Just as a point of interest, we now have the ability for a non-threaded
program to have several threads in the kernel.  By this I mean it would be
theoretically possible to re-implement aio_read() in terms of some
background threads (doing synchronous I/O) in the kernel, that the program
is not aware of.  We don't have this happen at the moment (hmm, actually we
do, but only in KSE programs), but we have the infrastructure that would
allow it to be done by someone who has a spare day or so.  Basically the
aio_read() would return, but the process would have left a worker thread in
the kernel completing the work, and since that thread is attached to the
process, the correct address space would be there automatically when it is
reactivated on data arrival.  All the 'exit' cases would be handled
automatically, etc. etc.
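To make the fallback pattern from Matt's quoted proposal concrete, here is a
minimal userland sketch.  pread_cache() is the *proposed* syscall and does
not exist (its EWOULDBLOCK failure mode is my assumption); aio_read(),
aio_return(), kqueue() and EVFILT_AIO via SIGEV_KEVENT are the existing
FreeBSD interfaces.

	/*
	 * Sketch only: try the (hypothetical) cache-only read first, and
	 * fall back to real AIO with kqueue completion if it would block.
	 */
	#include <sys/types.h>
	#include <sys/event.h>
	#include <aio.h>
	#include <errno.h>
	#include <string.h>
	#include <unistd.h>

	/* Hypothetical syscall from the proposal: return only cached data. */
	ssize_t pread_cache(int fd, void *buf, size_t nbytes, off_t offset);

	static ssize_t
	cached_or_async_read(int fd, int kq, void *buf, size_t nbytes,
	    off_t offset)
	{
		struct aiocb acb;
		struct kevent ev;
		ssize_t n;

		/* Fast path: data is already in the buffer/VM page cache. */
		n = pread_cache(fd, buf, nbytes, offset);
		if (n >= 0)
			return (n);
		if (errno != EWOULDBLOCK)	/* assumed error convention */
			return (-1);

		/* Slow path: queue real AIO, completion delivered to kq. */
		memset(&acb, 0, sizeof(acb));
		acb.aio_fildes = fd;
		acb.aio_buf = buf;
		acb.aio_nbytes = nbytes;
		acb.aio_offset = offset;
		acb.aio_sigevent.sigev_notify = SIGEV_KEVENT;
		acb.aio_sigevent.sigev_notify_kqueue = kq;
		acb.aio_sigevent.sigev_value.sival_ptr = &acb;
		if (aio_read(&acb) == -1)
			return (-1);

		/*
		 * A real caller would pick this up from its event loop; we
		 * block here so the local aiocb stays valid until done.
		 */
		if (kevent(kq, NULL, 0, &ev, 1, NULL) == -1)
			return (-1);
		return (aio_return((struct aiocb *)ev.udata));
	}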
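And as an illustration (not the in-kernel mechanism described above), the
"worker thread completes the I/O synchronously while the caller returns"
idea looks roughly like this in userland, with a pthread standing in for
the kernel-resident worker; all the names here (async_req, submit_read,
reap_read) are made up for the example.

	#include <pthread.h>
	#include <unistd.h>

	struct async_req {
		int		fd;
		void		*buf;
		size_t		nbytes;
		off_t		offset;
		ssize_t		result;		/* filled in by the worker */
		pthread_t	worker;
	};

	static void *
	worker_main(void *arg)
	{
		struct async_req *req = arg;

		/* Plain blocking I/O; the submitter is not held up by it. */
		req->result = pread(req->fd, req->buf, req->nbytes,
		    req->offset);
		return (NULL);
	}

	/* Submit: returns at once, like aio_read(), leaving work pending. */
	static int
	submit_read(struct async_req *req, int fd, void *buf, size_t nbytes,
	    off_t offset)
	{
		req->fd = fd;
		req->buf = buf;
		req->nbytes = nbytes;
		req->offset = offset;
		return (pthread_create(&req->worker, NULL, worker_main, req));
	}

	/* Reap: join the worker and collect the result, like aio_return(). */
	static ssize_t
	reap_read(struct async_req *req)
	{
		pthread_join(req->worker, NULL);
		return (req->result);
	}

The point Julian is making is that when the worker lives in the kernel and
is attached to the submitting process, the address-space and exit handling
that this userland version gets from pthreads comes for free.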