From owner-freebsd-current@FreeBSD.ORG Mon Jul  7 20:53:12 2003
Date: Mon, 7 Jul 2003 22:53:10 -0500
From: Dan Nelson
To: Andy Farkas
Cc: freebsd-current@freebsd.org, freebsd-smp@freebsd.org
Subject: Re: whats going on with the scheduler?
Message-ID: <20030708035309.GE87950@dan.emsphone.com>
In-Reply-To: <20030708090530.T6312-100000@hewey.af.speednet.com.au>
User-Agent: Mutt/1.5.4i
X-OS: FreeBSD 5.1-CURRENT

In the last episode (Jul 08), Andy Farkas said:
> FreeBSD 5.1-RELEASE with SCHED_4BSD on a quad ppro 200 (dell 6100/200).
>
> Last night I started 3 setiathome's then went to bed. The system was
> otherwise idle and had a load of 3.00, 3.00, 3.00.
>
> This morning, I wanted to copy a (large) file from a remote server, so I
> did a:
>
>   scp -c blowfish -p -l 100 remote.host:filename .
>
> which is running in another window (and will run for 3 more hours).
>
> And now, on my otherwise idle system, the load is varying from less
> than 2.00 (!)
> to just over 3.00, with an average of about 2.50.
>
> Here is some output from top:
>
>   PID USERNAME   PRI NICE   SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
> 42946 setiathome 139   15 15524K 14952K *Giant 0  39.9H 89.26% 89.26% setiathome
> 49332 andyf      130    0  3084K  2176K *Giant 2  81:49 67.68% 67.68% ssh
>    12 root       -16    0     0K    12K CPU2   2 152.1H 49.12% 49.12% idle: cpu2
>    13 root       -16    0     0K    12K CPU1   1 148.7H 44.58% 44.58% idle: cpu1
>    11 root       -16    0     0K    12K RUN    3 152.1H 44.14% 44.14% idle: cpu3
>    14 root       -16    0     0K    12K CPU0   0 143.3H 41.65% 41.65% idle: cpu0
> 42945 setiathome 129   15 15916K 14700K *Giant 2  39.0H 25.20% 25.20% setiathome
> 42947 setiathome 129   15 15524K 14956K *Giant 1  40.3H 22.61% 22.61% setiathome
>
> So, can someone explain why the seti procs are not getting 100% cpu like
> they were before the scp (ssh) started, and why there is so much idle
> time? I bet those *Giants have something to do with it...

Most likely. That means they are waiting for some other process to
release the big Giant kernel lock. Paste in top's header so we can see
how many processes are locked and what the system CPU percentage is. A
truss of one of the seti processes may be useful too; setiathome really
shouldn't be making many syscalls at all.

-- 
	Dan Nelson
	dnelson@allantgroup.com
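[Editor's note: as a rough back-of-the-envelope check, the WCPU figures quoted in the top output above can be summed to quantify the idle time Andy is asking about. This sketch is not part of the original mail; the percentages are simply transcribed from the quoted table, and the file path is arbitrary.]

```shell
#!/bin/sh
# Transcribe the WCPU column from the quoted top(1) output: PID, label, WCPU%.
cat > /tmp/top-snippet.txt <<'EOF'
42946 setiathome 89.26
49332 ssh        67.68
12    idle-cpu2  49.12
13    idle-cpu1  44.58
11    idle-cpu3  44.14
14    idle-cpu0  41.65
42945 setiathome 25.20
42947 setiathome 22.61
EOF

# Sum busy vs. idle percentages (out of 400% total on a 4-CPU box).
awk '/idle/ { idle += $3; next } { busy += $3 }
     END { printf "busy %.2f%%  idle %.2f%%\n", busy, idle }' /tmp/top-snippet.txt
```

This prints roughly 205% busy against 179% idle: with three CPU-bound setiathome processes plus ssh runnable on four CPUs, nearly half the machine is sitting idle, which is consistent with the processes spending their time blocked on Giant rather than running.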