From owner-freebsd-smp@FreeBSD.ORG Mon Jul  7 16:33:19 2003
Date: Tue, 8 Jul 2003 09:33:15 +1000 (EST)
From: Andy Farkas <andyf@speednet.com.au>
To: freebsd-current@FreeBSD.ORG
Cc: freebsd-smp@FreeBSD.ORG
Subject: whats going on with the scheduler?
Message-ID: <20030708090530.T6312-100000@hewey.af.speednet.com.au>
List-Id: FreeBSD SMP implementation group

FreeBSD 5.1-RELEASE with SCHED_4BSD on a quad ppro 200 (dell 6100/200).

Last night I started 3 setiathome's then went to bed. The system was
otherwise idle and had a load of 3.00, 3.00, 3.00.

This morning, I wanted to copy a (large) file from a remote server, so
I did a:

  scp -c blowfish -p -l 100 remote.host:filename .

which is running in another window (and will run for 3 more hours).
And now, on my otherwise idle system, the load is varying from less
than 2.00 (!) to just over 3.00, with an average of about 2.50.
Here is some output from top:

  PID USERNAME   PRI NICE   SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
42946 setiathome 139   15 15524K 14952K *Giant 0  39.9H 89.26% 89.26% setiathome
49332 andyf      130    0  3084K  2176K *Giant 2  81:49 67.68% 67.68% ssh
   12 root       -16    0     0K    12K CPU2   2 152.1H 49.12% 49.12% idle: cpu2
   13 root       -16    0     0K    12K CPU1   1 148.7H 44.58% 44.58% idle: cpu1
   11 root       -16    0     0K    12K RUN    3 152.1H 44.14% 44.14% idle: cpu3
   14 root       -16    0     0K    12K CPU0   0 143.3H 41.65% 41.65% idle: cpu0
42945 setiathome 129   15 15916K 14700K *Giant 2  39.0H 25.20% 25.20% setiathome
42947 setiathome 129   15 15524K 14956K *Giant 1  40.3H 22.61% 22.61% setiathome

So, can someone explain why the seti procs are not getting 100% cpu
like they were before the scp (ssh) started, and why there is so much
idle time? I bet those *Giants have something to do with it...

--

 :{ andyf@speednet.com.au
                                        Andy Farkas
                                System Administrator
                   Speednet Communications http://www.speednet.com.au/