From: "Aluminium Oxide" <orac000@internet-mail.org>
To: freebsd-hpc@freebsd.org, mpi-comments@mpi-forum.org
Cc: freebsd-current@freebsd.org
Date: Tue, 14 Feb 2006 15:58:53 +1030
Subject: HPC: Using Message Passing to distribute threads

Forgive me if I am suggesting that we reinvent the wheel, but I have a
problem with a potentially simple solution. It concerns the difficulty
of adapting an application to use a parallel computing system such as
MPI or PVM.

I would like help, if possible, to write a simple (heh heh) compiler
directive, header, or wrapper function which lets one tag or wrap a
call to a function that will be called iteratively, so that it spawns
not just a new thread, but a new thread ***which can be passed to
another node*** in a parallel computer system. This seems like a very
simple and elegant method by which non-parallelised code could be
adapted to a parallel architecture.

My C, and my understanding of threading, are very limited, and I've
never written any kernel code. However, I will try to give an example.
The adaptation process would simply become:

o  #include a header which adds support for a parallel-computing
   thread call;
o  locate higher-level functions which are computationally intensive
   and will be called iteratively;
o  replace the raw function call with a pvmwrapped call.

E.g.,

/* A module to calculate n! for the first `number' integers, e.g. the first 1000 */
#include <stdio.h>

int i, number;
long double number_factorial;

long double factorial (int number) {.....}

.....
scanf("%d", &number);
for (i = 1; i <= number; i++) {
    number_factorial = factorial(i);
    printf("%d factorial = %Lf\n", i, number_factorial);
}
....

would become

/* A module to calculate n! for the first `number' integers, e.g. the first 1000 */
#include <stdio.h>
#include <pvmwrap.h>    /* hypothetical header providing the wrapper */

int i, number;
long double number_factorial;

long double factorial (int number) {.....}

.....
scanf("%d", &number);
for (i = 1; i <= number; i++) {
    number_factorial = pvmwrap(factorial(i));
    printf("%d factorial = %Lf\n", i, number_factorial);
}
....
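To make that a bit more concrete, here is a rough, untested sketch of
what the calling side of pvmwrap might boil down to if it were
hand-coded against the standard PVM 3 API, specialised to the factorial
example. The worker task name "factorial_worker", the message tags and
the function name pvmwrap_factorial are all made up for illustration,
and plain double is used on the wire because that is what PVM's packing
routines handle:

#include <pvm3.h>

long double factorial(int);     /* the original local function, used as a fallback */

/* Hypothetical caller-side wrapper, specialised to the factorial
 * example. "factorial_worker" is an assumed name for a separately
 * compiled PVM task (a guess at it is sketched further below). */
long double pvmwrap_factorial(int n)
{
    int tid;
    double result;

    /* start one worker task on whichever node PVM picks next */
    if (pvm_spawn("factorial_worker", NULL, PvmTaskDefault, "",
                  1, &tid) != 1)
        return factorial(n);    /* no node available: fall back to a local call */

    /* marshal the argument and ship it to the worker */
    pvm_initsend(PvmDataDefault);
    pvm_pkint(&n, 1, 1);
    pvm_send(tid, 1);           /* msgtag 1: "here is your input" */

    /* block until the worker sends the answer back */
    pvm_recv(tid, 2);           /* msgtag 2: "here is the result" */
    pvm_upkdouble(&result, 1, 1);

    return (long double)result;
}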
pvmwrap would make the necessary calls, via the message-passing
protocol, to create the thread on the next available node rather than
on the local system, and to return the result to the caller. pvmwrap
would need to perform type identification of the variables (or of the
targets of pointers) and declare these on the executing node first, so
that execution can proceed without the programmer having to code those
declarations as parallelised by hand (which would greatly complicate
the adaptation to parallelism). The few cycles used to perform this
type identification on each iteration are negligible compared with
those of the wrapped function itself. A guess at what the spawned
worker task might look like is appended below the sig.

What say ye?

Damien Miller

===================================
Sub POSIX lumen
orac000@internet-mail.org
+61 422 921 498
au.geocities.com/orac000000/bsd.html
===================================

--
Aluminium Oxide
orac000@internet-mail.org
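P.S. And a guess at the worker side -- the separately compiled task
that pvmwrap would spawn on the remote node. Again untested, using the
same made-up task name and message tags as in the earlier sketch; it
declares its own copies of the variables, unpacks the input, computes,
and sends the result back to whichever task spawned it:

/* factorial_worker.c */
#include <pvm3.h>

static double factorial(int n)
{
    double f = 1.0;
    while (n > 1)
        f *= n--;
    return f;
}

int main(void)
{
    int n;
    double result;
    int parent = pvm_parent();   /* tid of the task that spawned us */

    pvm_recv(parent, 1);         /* msgtag 1: the input */
    pvm_upkint(&n, 1, 1);

    result = factorial(n);

    pvm_initsend(PvmDataDefault);
    pvm_pkdouble(&result, 1, 1);
    pvm_send(parent, 2);         /* msgtag 2: the result */

    pvm_exit();
    return 0;
}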