From owner-freebsd-current@FreeBSD.ORG Mon Dec 19 20:36:51 2011
From: Stefan Esser <se@freebsd.org>
Date: Mon, 19 Dec 2011 21:36:46 +0100
To: Dan Nelson
Cc: FreeBSD Current <freebsd-current@freebsd.org>
Subject: Re: Uneven load on drives in ZFS RAIDZ1
Message-ID: <4EEFA05E.7090507@freebsd.org>
In-Reply-To: <20111219162220.GK53453@dan.emsphone.com>

On 19.12.2011 17:22, Dan Nelson wrote:
> In the last episode (Dec 19), Stefan Esser said:
>> For quite some time I have observed an uneven distribution of load
>> between the drives in a 4 * 2TB RAIDZ1 pool. The following is an
>> excerpt of a longer log of 10-second averages logged with gstat:
>>
>> dT: 10.001s  w: 10.000s  filter: ^a?da?.$
>>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>>     0    130    106   4134    4.5     23   1033    5.2   48.8| ada0
>>     0    131    111   3784    4.2     19   1007    4.0   47.6| ada1
>>     0     90     66   2219    4.5     24   1031    5.1   31.7| ada2
>>     1     81     58   2007    4.6     22   1023    2.3   28.1| ada3
> [...]
>> zpool status -v
>>   pool: raid1
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>     NAME        STATE     READ WRITE CKSUM
>>     raid1       ONLINE       0     0     0
>>       raidz1-0  ONLINE       0     0     0
>>         ada0p2  ONLINE       0     0     0
>>         ada1p2  ONLINE       0     0     0
>>         ada2p2  ONLINE       0     0     0
>>         ada3p2  ONLINE       0     0     0
>
> Any read from your raidz device will hit three disks (the checksum is
> applied across the stripe, not on each block, so a full stripe is
> always read), so I think your extra I/Os are coming from somewhere
> else.
>
> What's on p1 on these disks?
> Could that be the cause of your extra I/Os?
> Does "zpool iostat -v 10" give you even numbers across all disks?

This is a ZFS-only system. The first partition on each drive holds just
the gptzfsloader.

              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       4.41T  2.21T    139     72  12.3M   818K
  raidz1    4.41T  2.21T    139     72  12.3M   818K
    ada0p2      -      -    114     17  4.24M   332K
    ada1p2      -      -    106     15  3.82M   305K
    ada2p2      -      -     65     20  2.09M   337K
    ada3p2      -      -     58     18  2.18M   329K
----------  -----  -----  -----  -----  -----  -----

              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       4.41T  2.21T    150     45  12.8M   751K
  raidz1    4.41T  2.21T    150     45  12.8M   751K
    ada0p2      -      -    113     14  4.34M   294K
    ada1p2      -      -    111     14  3.94M   277K
    ada2p2      -      -     62     16  2.23M   294K
    ada3p2      -      -     68     14  2.32M   277K
----------  -----  -----  -----  -----  -----  -----

              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       4.41T  2.21T    157     86  12.3M  6.41M
  raidz1    4.41T  2.21T    157     86  12.3M  6.41M
    ada0p2      -      -    119     39  4.21M  2.24M
    ada1p2      -      -    106     31  3.78M  2.21M
    ada2p2      -      -     81     59  2.23M  2.23M
    ada3p2      -      -     57     39  2.06M  2.22M
----------  -----  -----  -----  -----  -----  -----

              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       4.41T  2.21T    187     45  14.2M  1.04M
  raidz1    4.41T  2.21T    187     45  14.2M  1.04M
    ada0p2      -      -    117     13  4.27M   398K
    ada1p2      -      -    120     12  4.01M   384K
    ada2p2      -      -     89     12  2.97M   403K
    ada3p2      -      -     85     13  2.91M   386K
----------  -----  -----  -----  -----  -----  -----

The same difference in read operations per second as shown by gstat ...

Regards, STefan
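For reference, the per-disk share of read operations in the first
iostat interval can be worked out from the quoted numbers (114, 106,
65 and 58 read ops for ada0p2..ada3p2). The awk one-liner below is
just an illustration of that arithmetic, not part of any ZFS tool:

```shell
# Per-disk share of pool read ops in the first 10-second interval
# (read-op counts taken from the zpool iostat output quoted above).
echo '114 106 65 58' | awk '{
  total = $1 + $2 + $3 + $4
  for (i = 1; i <= 4; i++)
    printf "ada%dp2: %4.1f%% of reads\n", i - 1, 100 * $i / total
  printf "ada0p2/ada3p2 ratio: %.1f\n", $1 / $4
}'
```

This gives roughly 33% and 31% of the reads to ada0/ada1 but only 19%
and 17% to ada2/ada3, about a 2:1 ratio between the busiest and the
idlest disk, which matches the ~48% vs. ~28-32% busy figures gstat
reported.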