From owner-freebsd-current@FreeBSD.ORG Thu May 28 09:43:33 2009
Date: Thu, 28 May 2009 18:43:30 +0900
From: Randy Bush <randy@psg.com>
To: Kip Macy
Cc: FreeBSD Current <freebsd-current@freebsd.org>
Subject: Re: kern/134011
In-Reply-To: <3c1674c90905280053i1089c4bbre4ca1cc0a5ccb052@mail.gmail.com>
References: <3c1674c90905262113x127ad54ex8672ce8cbbf7eb1c@mail.gmail.com>
	<3c1674c90905262203o66064f1m7797f1e0f8f370c2@mail.gmail.com>
	<3c1674c90905280053i1089c4bbre4ca1cc0a5ccb052@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
List-Id: Discussions about the use of FreeBSD-current

>>>>> Which arch?
>>>> amd64 and i386
>>>>> How much memory?
>>>> 4g in all cases but one.  that is 1g
>>> You're having problems with both architectures and with 4g?
>> yep.
>> i am presuming that it is some kernel or other config aspect.

> What type of hard drives?

a zfs system

  ad4: 305245MB at ata2-master SATA150
  ad6: 305245MB at ata3-master SATA150
  ad8: 305245MB at ata4-master SATA150
  ad10: 305245MB at ata5-master SATA150

a gmirror system

  ad4: 238475MB at ata2-master SATA150
  ad5: 238475MB at ata2-slave SATA150
  ad6: 238475MB at ata3-master SATA150
  ad7: 238475MB at ata3-slave SATA150

> How big are your zpools?

again, this is happening on non-zfs systems as well.  i do not think
this is zfs related.  but the zfs system is the one with the worst
lockups.  it looks like

Filesystem        1024-blocks      Used     Avail Capacity  Mounted on
/dev/mirror/boota     8122126    636960   6835396     9%    /
devfs                       1         1         0   100%    /dev
procfs                      4         4         0   100%    /proc
tank/data           653313024         0 653313024     0%    /data
tank/data/nfsen     845243776 191930752 653313024    23%    /data/nfsen
tank/data/rpki      653494144    181120 653313024     0%    /data/rpki
tank                653313024         0 653313024     0%    /tank
tank/usr            658919040   5606016 653313024     1%    /usr
tank/usr/home       660368256   7055232 653313024     1%    /usr/home
tank/usr/usr        658758144   5445120 653313024     1%    /usr/usr
tank/var            654433024   1120000 653313024     0%    /var
tank/var/log        653400960     87936 653313024     0%    /var/log
tank/var/spool      653337088     24064 653313024     0%    /var/spool
/dev/md0               253678        14    233370     0%    /tmp
devfs                       1         1         0   100%    /data/rpki/rcynic/dev

> Do you use compression?

nope

randy
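[For readers following the thread: the pool/mirror state being discussed can be
inspected with the standard FreeBSD tools. This is a generic sketch, not commands
from the original exchange; the pool name `tank` is taken from the df output above.]

```shell
# Per-dataset compression setting (answers the "do you use compression?" question)
zfs get -r compression tank

# Pool size, health, and any accumulated device errors
zpool list tank
zpool status -v tank

# On the gmirror boxes, mirror component state
gmirror status
```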