From owner-freebsd-performance@FreeBSD.ORG Tue Sep 14 12:10:49 2010
From: grarpamp <grarpamp@gmail.com>
To: freebsd-performance@freebsd.org
Date: Tue, 14 Sep 2010 07:48:25 -0400
Subject: Sequential disk IO saturates system

We have [re]nice to deal with user processes. Is there no way to
effectively rate limit the disk pipe? As it is now, this machine
can't do any userland work because it's completely buried by the
simple degenerate case of:

  cp /fs_a/.../giga_size_files /fs_b/...

Geli and zfs are in use, yet that doesn't seem to be an excuse for
this behavior.

I can read 60MB/s off the raw spindles without much issue. Yet add
geli and I get about 15MB/s, which would be completely fine as well,
except that the box gets swamped in system time while doing it. And
around 11MB/s off geli+zfs, with the same swamping caveat.

And although reads and writes run at about the same MB/s rates, it
is the bulk writes that thoroughly bury the system, far more than
the reads do. That case really hurts and removes all usability.

Sure, maybe one could set some ancient PIO mode on the [s]ata/scsi
channels [untested here]. But that seems far less than ideal, as
users commonly mix raw and geli+zfs partitions on the same set of
spindles.

Is there a description of the underlying issue available?

And unless I'm missing something like an already existing insertable
geom rate limit, or a way to renice kernel processes... is it right
to say that FreeBSD needs these options and/or some equivalent work
in this area?

As I'm without an empty raw disk right now, I can only write to zfs,
so I have yet to test writes to the raw spindle and to geli.
Regardless, perhaps the proper solution lies with the right sort of
future knob, as in the previous paragraph?
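
A minimal sketch of the sequential-read comparison described above,
assuming hypothetical provider names (a raw disk at /dev/ada0 with a
geli attach at /dev/ada0.eli); substitute whatever providers are
actually in play:

  # Raw spindle read rate (~60MB/s in the figures above).
  dd if=/dev/ada0 of=/dev/null bs=1m count=4096
  # The same read through the geli layer (~15MB/s above).
  dd if=/dev/ada0.eli of=/dev/null bs=1m count=4096
  # Watch where the time goes while either dd runs.
  top -S
  gstat

On the "insertable geom rate limit" question: the geom_sched class
(gsched(8), which appeared around FreeBSD 8.x) can be inserted in
front of an existing provider, though it schedules and reorders I/O
rather than capping bandwidth, so it is at best a partial answer:

  # Hedged sketch; verb and algorithm name as per gsched(8).
  kldload geom_sched
  geom sched insert -a rr ada0
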
From owner-freebsd-performance@FreeBSD.ORG Tue Sep 14 17:02:50 2010
From: Wiktor Niesiobedzki <bsd@vink.pl>
To: grarpamp
Cc: freebsd-performance@freebsd.org
Date: Tue, 14 Sep 2010 18:32:08 +0200
Subject: Re: Sequential disk IO saturates system

Hi,

You may try playing with the kern.sched.preempt_thresh setting (as per
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=665455+0+archive/2010/freebsd-stable/20100905.freebsd-stable).
Renicing the process doesn't give any improvement, because it is the
g_eli* kernel thread that is consuming your CPU, and that thread runs
at a pretty high priority.

Since my last update I don't see much of the problem, but previously

  dd if=/dev/gzero.eli of=/dev/null bs=1M

could cause CPU starvation of any other process. Now that doesn't
happen anymore (though I still see some performance drops during txg
commits, e.g. in network throughput).

I've also changed vfs.zfs.txg.synctime to 1 second (default: 5
seconds), so txg commits are shorter, though more frequent. This
helped alleviate my problems. YMMV.

Cheers,

Wiktor Niesiobedzki

2010/9/14 grarpamp:
> We have [re]nice to deal with user processes.
>
> Is there no way to effectively rate limit the disk pipe? As it is
> now, this machine can't do any userland work because it's completely
> buried by the simple degenerate case of:
>   cp /fs_a/.../giga_size_files /fs_b/...
> [...]
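
For reference, the two knobs mentioned above as one hedged sketch.
The preempt_thresh value shown is an illustrative assumption (pick
one per the referenced freebsd-stable discussion), and on some
versions these may be loader tunables rather than runtime sysctls:

  # Inspect, then raise, the ULE preemption threshold so that a
  # wider range of thread priorities is allowed to preempt.
  sysctl kern.sched.preempt_thresh
  sysctl kern.sched.preempt_thresh=224    # illustrative value
  # Shorten ZFS txg commits from the 5-second default to 1 second,
  # as described above.
  sysctl vfs.zfs.txg.synctime=1

Trading fewer, longer txg commits for more frequent, shorter ones is
what smooths out the stalls, at the cost of some extra commit
overhead.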