From owner-freebsd-hackers Tue Jan 25 8:48:21 2000
Delivered-To: freebsd-hackers@freebsd.org
Received: from bomber.avantgo.com (ws1.avantgo.com [207.214.200.194])
	by hub.freebsd.org (Postfix) with ESMTP id EF77B14EB8
	for <freebsd-hackers@freebsd.org>; Tue, 25 Jan 2000 08:48:12 -0800 (PST)
	(envelope-from scott@avantgo.com)
Received: from river ([10.0.128.30]) by bomber.avantgo.com
	(Netscape Messaging Server 3.5) with SMTP id 214;
	Tue, 25 Jan 2000 08:44:12 -0800
Message-ID: <0c2101bf6753$cf37f280$1e80000a@avantgo.com>
From: "Scott Hess" <scott@avantgo.com>
To: "Scott Hess", "Matthew Dillon"
Cc:
References: <01b601bf6696$60701930$1e80000a@avantgo.com>
	<200001241939.LAA91219@apollo.backplane.com>
	<0be801bf6715$601423d0$1e80000a@avantgo.com>
Subject: Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.
Date: Tue, 25 Jan 2000 08:47:03 -0800
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="----=_NextPart_000_0C1E_01BF6710.C075B360"
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 5.00.2919.6600
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2919.6600
Sender: owner-freebsd-hackers@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

"Scott Hess" wrote:
> "Matthew Dillon" wrote:
> > :Unfortunately, I've found that having a group of processes reading
> > :from a group of socketpairs has better performance than having
> > :them all read from a single socketpair.  I've been unable to
> > :determine why.
> >
> > The problem is that when you have N processes waiting on a single
> > socket and you write to the socket, all N processes will wake up even
> > though only one will get the data you wrote.
> >
> > As an alternative to socket pairs, I would consider using SysV shared
> > memory and SysV semaphores.
>
> OK, so let's say I did spend some time implementing it in terms of
> semget() and semop().  Would you be totally appalled if the performance
> turned out to be about the same as using a single socketpair?
> Unfortunately, I'll have to wait until tomorrow morning to rip things
> out and make a suitable example program for posting.

Find attached a new version of commtest.c which uses semaphores.  I've
also placed a copy at http://www.doubleu.com/~scott/commtest.c.  The
performance is close enough to identical that I'm guessing semaphores
suffer from exactly the same problem as the single-socketpair version.

Thanks,

scott
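For reference, the semaphore pattern being timed reduces to roughly the
following minimal sketch (the names, constants, and simplified teardown
here are illustrative, not the ones in the attached commtest.c): all of
the workers sleep in semop() on a single SysV semaphore while the parent
posts it once per request.  If each post wakes every sleeper so they can
re-evaluate the operation, just as each write() to a shared socketpair
wakes every blocked reader, then identical timings are exactly what you
would expect.  A matching sketch of the shared-socketpair case follows
the attachment.

/* sem_sketch.c -- minimal sketch, not the attached commtest.c.
 * WORKERS children all sleep in semop() on one semaphore; the
 * parent posts it REQUESTS times. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>

#define WORKERS  32
#define REQUESTS 10000

static void sem_change( int sem, int delta)
{
    struct sembuf op;

    op.sem_num=0;
    op.sem_op=delta;
    op.sem_flg=0;
    if( semop( sem, &op, 1)==-1) {
        perror( "semop");
        exit( 1);
    }
}

int main( void)
{
    int sem, ii;
    union semun arg;   /* declared in <sys/sem.h> on FreeBSD; other
                        * systems may require defining it yourself. */

    /* SEM_R|SEM_A is FreeBSD's owner read/alter permission, as used
     * in commtest.c. */
    sem=semget( IPC_PRIVATE, 1, SEM_R|SEM_A);
    if( sem==-1) {
        perror( "semget");
        exit( 1);
    }
    arg.val=0;
    if( semctl( sem, 0, SETVAL, arg)==-1) {   /* start at zero */
        perror( "semctl");
        exit( 1);
    }

    for( ii=0; ii<WORKERS; ii++) {
        if( fork()==0) {
            while( 1) {
                /* All WORKERS children sleep here.  Each post below
                 * may wake every one of them to retry the operation;
                 * only one actually decrements the semaphore. */
                sem_change( sem, -1);
            }
        }
    }

    for( ii=0; ii<REQUESTS; ii++) {
        sem_change( sem, 1);
    }

    semctl( sem, 0, IPC_RMID);   /* children error out of semop() */
    for( ii=0; ii<WORKERS; ii++) {
        wait( NULL);
    }
    return 0;
}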
Content-Disposition: attachment; filename="commtest.c"

// commtest.c
// gcc -Wall -g -o commtest commtest.c
//
// Test performance differences for multiple socketpairs versus a
// single shared socketpair versus SYSV semaphores.
//
// NOTE: the list archive stripped everything between '<' and '>'
// characters, so the header names below are the probable set, and
// spans that could not be recovered are marked with [...] comments.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>

typedef unsigned char request_t;

#define CLIENT_EXIT ((request_t)(~0))
#define CLIENT_COUNT 32
#define REQUEST_TARGET 10000

int client_fd_count=0;
int client_fds[ CLIENT_COUNT];
int server_fds[ CLIENT_COUNT];

/* Reflect requests. */
void socket_client( int fd)
{
    request_t request;
    int rc;

    while( 1) {
        if( (rc=read( fd, &request, sizeof( request)))==-1) {
            perror( "client read");
            _exit( 1);
        } else if( rc
/* [... lost in the archive: the rest of this condition and function,
   the semaphore client, and the start of main().  The surviving text
   resumes mid-main(), at the semaphore setup: ...] */
        sem=semget( IPC_PRIVATE, sem_count, SEM_R|SEM_A);
        if( sem==-1) {
            perror( "semget");
            exit( 1);
        }
    }

    maxfd=0;
    FD_ZERO( &default_fdset);
    for( ii=0; ii<client_fd_count; ii++) {
        FD_SET( client_fds[ ii], &default_fdset);
        if( client_fds[ ii]>maxfd) {
            maxfd=client_fds[ ii];
        }
    }

    /* Spin off children to process requests. */
    for( ii=0; ii
/* [... the remainder of the file was truncated in the archive.  The
   complete program was at http://www.doubleu.com/~scott/commtest.c.] */
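And for comparison, the shared-socketpair case Matthew described
reduces to the sketch below (again with illustrative names and sizes,
and minimal error handling, not the code from the attachment): all the
children sleep in read() on the same descriptor, so every one-byte
request the parent writes wakes all of them, and only the child that
wins the race gets the byte.

/* pair_sketch.c -- minimal sketch of the shared-socketpair case. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define WORKERS  32
#define REQUESTS 10000

int main( void)
{
    int sv[ 2], ii;
    unsigned char request=0, reply;

    if( socketpair( AF_UNIX, SOCK_STREAM, 0, sv)==-1) {
        perror( "socketpair");
        exit( 1);
    }

    for( ii=0; ii<WORKERS; ii++) {
        if( fork()==0) {
            close( sv[ 0]);   /* so the parent's close gives us EOF */
            /* All WORKERS children sleep in this read().  Replies
             * written to sv[1] travel the other direction of the
             * full-duplex pair, so no sibling can steal them. */
            while( read( sv[ 1], &request, 1)==1) {
                write( sv[ 1], &request, 1);   /* reflect it back */
            }
            _exit( 0);
        }
    }

    for( ii=0; ii<REQUESTS; ii++) {
        write( sv[ 0], &request, 1);   /* wakes all WORKERS sleepers */
        read( sv[ 0], &reply, 1);      /* reply from whichever one won */
    }

    close( sv[ 0]);   /* EOF: children fall out of the loop and exit */
    for( ii=0; ii<WORKERS; ii++) {
        wait( NULL);
    }
    return 0;
}

The multiple-socketpair variant avoids the pile-up because each child
sleeps on its own descriptor, so a write wakes exactly one process --
presumably why it wins in commtest.c.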