From: Dieter <freebsd@sopwith.solgatos.com>
To: freebsd-performance@freebsd.org
Subject: A specific example of a disk i/o problem (was: FreeBSD vs Ubuntu)
Date: Tue, 29 Sep 2009 19:26:14 PDT
Message-Id: <200909300226.CAA29195@sopwith.solgatos.com>
In-reply-to: Your message of "Tue, 29 Sep 2009 12:42:13 EDT."
List-Id: Performance/tuning

> > My question is why is FreeBSD's disk i/o performance so bad?
>
> As I mentioned... this was discussed actively in slashdot. You will find
> there many good comments on this.

All I saw on Slashdot was an ffs vs. ext comment. I don't believe the
problems I'm seeing are filesystem related.

> > Not just in the benchmarks with debugging on, but in real world usage
> > where it actually matters.
>
> Are you saying this from actual experience or from reading other people's
> comments?

Here is a specific demo of one disk i/o problem I'm seeing. It should be
easy to reproduce:
http://lists.freebsd.org/pipermail/freebsd-performance/2008-July/003533.html

This was over a year ago, so add 7.1 to the list of versions with the
problem.

I believe the

    swap_pager: indefinite wait buffer: bufobj: 0, blkno: 1148109, size: 4096

messages I'm getting are the same problem. A user process is hogging the
bottleneck (the disk buffer cache?) and the swapper/pager is getting
starved. I frequently see cases where disk i/o on one disk starves a
process that needs disk i/o on a different disk on a different controller,
which is why I suspect the disk buffer cache is the bottleneck.

> If it is from actual experience and XYZ version of Linux does a
> particular job better then I don't see why you should not consider using
> what works best.

I was stuck running Linux on one machine for a while and it scrambled my
data. No thank you. Data integrity is essential. Thankfully I have been
penguin-free for a while now.
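For anyone wanting to try this, the cross-disk starvation test could be
sketched roughly as follows. The paths and sizes here are my own
assumptions for illustration, not taken from the linked report; in a real
test the two files would need to live on different physical disks on
different controllers.

```shell
#!/bin/sh
# Hypothetical reproduction sketch. BUSY and VICTIM are placeholder
# paths; put them on two different disks to observe the effect.
BUSY=/tmp/busy.dat
VICTIM=/tmp/victim.dat

# Create the "victim" file up front so only its read is being timed.
dd if=/dev/zero of="$VICTIM" bs=4096 count=1 2>/dev/null

# Saturate one disk with a large sequential write in the background.
dd if=/dev/zero of="$BUSY" bs=1048576 count=64 2>/dev/null &
writer=$!

# While that write is in flight, time a tiny read of the other file.
# If the two disks (and the shared buffer cache) were truly
# independent, this read should complete almost instantly.
time dd if="$VICTIM" of=/dev/null bs=4096 2>/dev/null

wait "$writer"
rm -f "$BUSY" "$VICTIM"
echo "test complete"
```

If the timed read stalls for seconds while the background write runs, the
i/o paths are not independent, consistent with a shared bottleneck such as
the buffer cache.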