From: Julian Elischer <julian@elischer.org>
Date: Mon, 23 Jan 2006 14:45:46 -0800
To: John Baldwin
Cc: freebsd-current@freebsd.org
Subject: Re: kernel thread as real threads..

John Baldwin wrote:

> On Monday 23 January 2006 17:22, Julian Elischer wrote:
>> John Baldwin wrote:
>>> On Thursday 19 January 2006 21:56, Julian Elischer wrote:
>>>> some progress..
>>>> as the first few lines show, it's not quite perfect yet but it's most
>>>> of the way there.. (Like proc 1 isn't init)
>>>
>>> One other note, watch out for the AIO daemons. They have to be kernel
>>> procs and not kthreads because they borrow the vmspace of the user
>>> process when performing AIO on another process' behalf.
>>
>> Yeah, I found that and the patches account for that.
>>
>> However, I would like to suggest that we change the way that aio works.
>>
>> My suggestion is that when a process does AIO, we "fork a ksegroup",
>> attach it to the process, and assign it one or more worker threads to do
>> the aio work. The userland process would be oblivious of the extra
>> (kernel) threads in that kseg, and they would be independently
>> schedulable. They would, however, automatically have full access to the
>> correct address space.
>
> That's probably a better model, yes. One thing I would prefer, though, is
> if we could limit the knowledge of ksegroups to the scheduler as much as
> possible and let the rest of the kernel just deal with threads and
> processes. Ideally, we'd reach the point where you have an API to say
> "create a thread for process p", and kthreads just use a kernel process
> for 'p', and maybe the API takes a flag to say whether a thread is
> separate or not. Really, aio should probably just be separate system
> scope threads, and you could almost do that in userland now.

The aim of ksegroups is simply to act as containers so that you can group threads from a scheduler perspective.
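To make John's "create a thread for process p" API concrete, here is roughly the shape I imagine. This is purely a sketch: nothing like it exists yet, and the names thread_create() and THR_SYSTEM_SCOPE are made up.

    /*
     * Hypothetical interface only.  Callers deal in struct proc and
     * struct thread; any ksegrp bookkeeping stays hidden behind the
     * implementation, i.e. with the scheduler.
     */
    struct proc;
    struct thread;

    /* Create a kernel-backed thread running func(arg) inside process p. */
    int thread_create(struct proc *p, void (*func)(void *), void *arg,
            int flags, struct thread **newtdp);

    /* Flag: give the thread its own scheduling entity (system scope). */
    #define THR_SYSTEM_SCOPE   0x01

The existing kthread code would then, as John says, just pass a dedicated kernel process for 'p'.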
The kernel threads code I have in the patch automatically creates system-scope threads by associating each of them with a separate kseg (ksegs are small). However, it might be fairer to group multiple aio threads into a single ksegrp so that you don't flood the system if you create a lot of them. Of course, that brings up the question of whether a process doing AIO needs just a single thread or a bunch of them (do you want a separate thread per aio request, or to multiplex them?). Multiplexing saves resources (and may be more extensible), but having a single blocking thread per request would be really simple and easy to debug. I guess the current multiplexing code would work for multiplexed threads; you could just drop the vm futzing code.
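For what it's worth, the per-process worker model would look roughly like the pseudo-C below, re-using the hypothetical thread_create()/THR_SYSTEM_SCOPE from the sketch above. The aio_queue/aio_job types and the aio_next_job()/aio_perform()/aio_complete() helpers are made up, and all locking and error handling is left out.

    /*
     * Sketch only.  Because the worker is created inside the submitting
     * process (in its own ksegrp), it already runs on that process's
     * vmspace, so the current vmspace-borrowing code could go away.
     */
    struct aio_queue;       /* hypothetical per-process request queue */
    struct aio_job;

    struct aio_job *aio_next_job(struct aio_queue *);  /* sleeps until a request is queued */
    void aio_perform(struct aio_job *);                /* does the actual I/O */
    void aio_complete(struct aio_job *);               /* marks it done, notifies the submitter */

    static void
    aio_worker_main(void *arg)
    {
            struct aio_queue *q = arg;
            struct aio_job *job;

            /*
             * Multiplexed model: one worker services every request queued
             * by its process, blocking in aio_perform() as needed.
             */
            for (;;) {
                    job = aio_next_job(q);
                    aio_perform(job);
                    aio_complete(job);
            }
    }

    static int
    aio_worker_create(struct proc *p, struct aio_queue *q)
    {
            struct thread *td;

            /*
             * One system-scope worker: schedulable independently of p's
             * own threads, but sharing p's address space.
             */
            return (thread_create(p, aio_worker_main, q, THR_SYSTEM_SCOPE, &td));
    }

One of these per process (or a small pool in a single ksegrp) gives you the multiplexed model; the thread-per-request model would simply call thread_create() for each request instead of queueing.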