Date: Fri, 9 Feb 2007 17:02:42 -0800 (PST)
From: Peter Holmes <peter_holmes2003@yahoo.com>
To: freebsd-hackers@freebsd.org
Subject: Re: pin/bind a pthread to a processor? (take 2)
Message-ID: <368841.53689.qm@web32903.mail.mud.yahoo.com>
This is something I am interested in doing as well. I had corresponded with Daniel Eischen and Julian Elischer about this; they suggested the approach below, but I haven't been able to get it all working. Basically, we could create SYSTEM_SCOPE pthreads, so that there is one KSE per pthread. I was hoping I could then bind each KSE using sched_bind(), but the kernel doesn't handle this very well. I know this sort of thing isn't for everyone, but it can be useful, as John G. has indicated. Will the gurus here comment on whether this approach can work and what I am doing wrong? (A rough sketch of the user-space half of this is at the end of this message.)

Muchas Gracias,
Peter

List:       freebsd-hackers
Subject:    pin/bind a pthread to a processor? (take 2)
From:       John Giacomoni <John.Giacomoni@colorado.edu>
Date:       2007-01-30 23:34:28
Message-ID: <722B66D4-4030-4C12-8C8D-8B3288F86498@colorado.edu>

Previously, when I asked this question, it turned out not to be as necessary as I thought. However, I now need a way to pin/bind a user-space thread to a processor until I'm done with it, as my timing constraints are too tight to account for the scheduler moving threads around. I checked sys/sched.h, sys/proc.h, pthread.h, and pthread_np.h, but it doesn't look like an API to do this was added in 6.2. Can someone point me at a way to hack this in? I'm working on a conference submission, and I unfortunately need to pin in user space, as abusing non-preemptive kernel threads is not sufficient for my task.

The plan is to have 1-3 threads pinned throughout the execution of the test (30 s to 10 min, maybe more) but to leave one CPU untouched so that normal system function can continue on it. When pinning, I'd also like to be able to pin to specific processors so I can account for the effects of die placement. This is important for my work on dual-processor dual-core AMD systems, where I/O is routed via HyperTransport through the first processor.

For those who are interested, this work is focused on building pipeline-parallel systems that overlap sequential work by streaming data through a sequence of processors. One application I've built with it supports GigE forwarding at the maximum rate for all frame sizes through user space. When this work is complete, it may be able to help Daniel O'Connor with his question about streaming data from the kernel to userland (1/18/07). Additional information and papers are available at http://www.cs.colorado.edu/~jgiacomo

thanks for any help!

John G

On Sep 8, 2006, at 6:33 PM, David Xu wrote:

> On Saturday 09 September 2006 04:18, John Giacomoni wrote:
>> Is it possible to bind a pthread to a processor in 5.5 or 6.1?
>>
>> I currently have a code base that uses libpthread with multiple
>> threads, mutexes, and condition variables. The problem I'm having
>> is that I seem to be suffering wall-clock timing aberrations that
>> I believe are introduced by the scheduler.
>>
>> Thanks,
>>
>> John G
>
> I don't think we have such an API allowing you to bind a thread to
> a specific CPU. I had implemented such an API for DragonFlyBSD, but
> its 1:1 threading is not mature yet.
>
> David Xu

--
John.Giacomoni@colorado.edu
University of Colorado at Boulder
Department of Computer Science
Engineering Center, ECCR 1B50
430 UCB
Boulder, CO 80303-0430
USA
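P.S. Here is the rough sketch mentioned above. It shows only the user-space half of the approach (creating SYSTEM_SCOPE pthreads so each thread is backed by its own KSE); the per-CPU binding step is exactly the missing piece, so the bind_thread_to_cpu() name in the comment is a hypothetical placeholder, not an existing interface.

/*
 * Sketch: spawn a few SYSTEM_SCOPE (kernel contention scope) pthreads,
 * one KSE per thread.  The CPU-binding step is left as a comment,
 * since there is no userland interface to sched_bind() in 6.x.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NWORKERS        3       /* leave at least one CPU untouched */

static void *
worker(void *arg)
{
        int cpu = (int)(intptr_t)arg;

        /*
         * HYPOTHETICAL: the thread would pin itself here, e.g. via
         * some sched_bind()-backed syscall.  No such userland call
         * exists in 6.2; this is the open question.
         *
         *      bind_thread_to_cpu(cpu);
         */

        printf("worker intended for CPU %d is running\n", cpu);
        /* ... timing-sensitive pipeline-stage work would go here ... */
        return (NULL);
}

int
main(void)
{
        pthread_t tid[NWORKERS];
        pthread_attr_t attr;
        int i;

        pthread_attr_init(&attr);
        /* One KSE per thread: system (kernel) contention scope. */
        if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0) {
                fprintf(stderr, "system scope not supported\n");
                exit(1);
        }

        for (i = 0; i < NWORKERS; i++)
                pthread_create(&tid[i], &attr, worker, (void *)(intptr_t)i);
        for (i = 0; i < NWORKERS; i++)
                pthread_join(tid[i], NULL);

        pthread_attr_destroy(&attr);
        return (0);
}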