From: Dieter
To: freebsd-performance@freebsd.org
Subject: Re: RELENG_7 heavy disk = system crawls
Date: Sat, 08 Aug 2009 21:02:43 PDT
Message-Id: <200908090402.EAA08610@sopwith.solgatos.com>
In-reply-to: Your message of "Sat, 08 Aug 2009 05:02:48 EDT."

> I can dd if=/dev/ad[n].eli of=/dev/null bs=1m and use 75% system
> all in geli, 27% disk busy, 20MiB/sec. Interface was slower but
> reasonable.

I think I understand now. You're doing encryption in the kernel,
which eats a lot of CPU, and nice only affects userland. So yes,
CPU is a significant part of your problem.

> I'm not sure yet how to isolate cpu from i/o under my geli+zfs
> setup. I think they're mated together.

Agreed.

> It's just that this workload has really put the screws to things
> and I don't see a way out. I'd like to deploy geli+zfs everywhere
> but if I can't login remotely because some user has it busied out
> on something I've no knobs to control, umm, yeah :)

Do you *need* geli+zfs? If so, you could see whether there are any
hardware crypto accelerators with FreeBSD support, or throw lots of
CPU (e.g. a Phenom II X4) at it.

> As to your i/o thing, I think back in RELENG_4 that if all the
> spindles were on the same pata controller/interrupt, monopolistic
> loads could occur.

atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xe000-0xe00f at device 6.0 on pci0
atapci1: port 0x9f0-0x9f7,0xbf0-0xbf3,0x970-0x977,0xb70-0xb73,0xcc00-0xcc0f mem 0xfebfb000-0xfebfbfff irq 21 at device 7.0 on pci0
atapci2: port 0x9e0-0x9e7,0xbe0-0xbe3,0x960-0x967,0xb60-0xb63,0xb800-0xb80f mem 0xfebfa000-0xfebfafff irq 22 at device 8.0 on pci0
atapci3: port 0x8c00-0x8c07,0x8800-0x8803,0x8400-0x8407,0x8000-0x8003,0x7c00-0x7c0f mem 0xfe9fe000-0xfe9fffff irq 17 at device 0.0 on pci3
atapci4: port 0x6c00-0x6c7f mem 0xfe6ff000-0xfe6ff07f,0xfe6f8000-0xfe6fbfff irq 16 at device 0.0 on pci4
atapci5: port 0x4c00-0x4c07,0x4800-0x4803,0x4400-0x4407,0x4000-0x4003,0x3c00-0x3c0f mem 0xfe3fe000-0xfe3fffff irq 18 at device 0.0 on pci6

The nForce PATA controller (atapci0) doesn't list an IRQ, which seems odd.
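
If it helps, here is roughly how I'd check whether geli is actually
getting a crypto accelerator and where the CPU time is going. This is
just a sketch, and the provider name (ad4.eli) is a placeholder for
whatever providers you actually have:

  # geli reports per provider whether it is doing the crypto in
  # hardware or in software (look for the "Crypto:" line)
  geli list ad4.eli

  # any crypto hardware the kernel attached to should show up here
  dmesg | grep -i crypto

  # the g_eli worker threads are kernel (system) processes, which is
  # why nice on the userland job doesn't touch them
  top -S

  # per-device interrupt rates, to see whether the disks really are
  # piling up on one controller's irq
  vmstat -i

  # per-disk busy % and throughput while the workload is running
  gstat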