From: InterNetX - Juergen Gotteswinter
Reply-To: jg@internetx.com
Date: Tue, 12 May 2015 16:09:46 +0200
To: freebsd-fs@freebsd.org
Subject: Re: ZFS RAID 10 capacity expansion and uneven data distribution
Message-ID: <555209AA.9010606@internetx.com>
In-Reply-To: <5552071A.40709@free.de>

On 12.05.2015 at 15:58, Kai Gallasch wrote:
> Hello list.
>
> What is the preferred way to expand a mirrored or RAID 10 zpool with
> additional mirror pairs?
>
> On one server I am currently running a four-disk RAID 10 zpool:
>
>   zpool                ONLINE       0     0     0
>     mirror-0           ONLINE       0     0     0
>       gpt/zpool-da2    ONLINE       0     0     0
>       gpt/zpool-da3    ONLINE       0     0     0
>     mirror-1           ONLINE       0     0     0
>       gpt/zpool-da4    ONLINE       0     0     0
>       gpt/zpool-da5    ONLINE       0     0     0
>
> Originally the pool consisted of only one mirror (zpool-da2 and
> zpool-da3). I then used "zpool add" to add mirror-1 to the pool.
>
> Directly after adding the new mirror, all the old data was physically
> sitting on the old mirror and none on the new disks.

Yep, this is the expected result. ZFS never moves blocks that have
already been written; it only balances new writes across the top-level
vdevs, favouring the vdev with the most free space.

> So there is much imbalance in the data distribution across the RAID 10.
> The effect is that IOPS are not evenly distributed across all devices
> of the pool: when the server is very busy, "gstat -p" shows the old
> mirror pair maxed out at 100% I/O usage while the other one is almost
> idle.

Right, works as designed. Reads can only be served from the vdev that
actually holds the blocks, so the old mirror carries almost all of the
load until the data gets rewritten. You can watch the skew per vdev
with "zpool list -v" and "zpool iostat -v" (sketch at the end of this
mail).

> Also: I noted that the old mirror pair had a FRAG of about 50%, while
> the new one only has 3%.

Same here. FRAG shows the fragmentation of the free space on each
vdev, and the nearly empty vdev naturally has almost none.

> So is it generally not a good idea to expand a mirrored pool or
> RAID 10 pool with new mirror pairs?

Depends. If the pool mostly takes new writes, the imbalance evens out
over time on its own; if the existing data is hot and rarely
rewritten, you are stuck with the skew.

> Or by which procedure can the existing data in the pool be evenly
> distributed across all devices inside the pool?

Destroy / recreate. Rewriting the data is the only way to redistribute
it: back everything up, destroy the pool, create it again with both
mirrors at once and restore (sketch at the end of this mail).

> Any hint appreciated.
>
> Regards,
> Kai.
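A minimal sketch of inspecting the skew, assuming the pool is really
named "zpool" as the status output above suggests:

    # Capacity, allocation and fragmentation per top-level vdev;
    # compare ALLOC and FRAG of mirror-0 vs. mirror-1.
    zpool list -v zpool

    # Per-vdev I/O statistics, refreshed every 5 seconds;
    # shows which mirror is actually serving the reads.
    zpool iostat -v zpool 5

"zpool iostat -v" is usually the quicker way to confirm that one
mirror is doing all the work while the other sits idle.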
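For reference, the expansion step Kai describes boils down to a single
command (sketch; device names taken from the status output above).
It only adds capacity as a new top-level vdev and does not touch
existing data:

    # Add a second mirror as a new top-level vdev.
    # Existing blocks stay on mirror-0; only new writes are
    # spread across both mirrors.
    zpool add zpool mirror gpt/zpool-da4 gpt/zpool-da5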
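And a minimal sketch of the destroy/recreate route, assuming a scratch
pool with enough free space ("backup" and the snapshot name
"rebalance" are placeholders):

    # 1. Snapshot everything and copy it to the scratch pool.
    zfs snapshot -r zpool@rebalance
    zfs send -R zpool@rebalance | zfs receive -F backup/zpool

    # 2. Recreate the pool with both mirrors from the start,
    #    so allocations are striped across them from day one.
    zpool destroy zpool
    zpool create zpool \
        mirror gpt/zpool-da2 gpt/zpool-da3 \
        mirror gpt/zpool-da4 gpt/zpool-da5

    # 3. Restore; the receive rewrites every block and thereby
    #    distributes the data evenly across both vdevs.
    zfs send -R backup/zpool@rebalance | zfs receive -F zpool

If no scratch pool is available, the same effect can be had in place,
dataset by dataset: zfs send each dataset to a new name inside the
pool, destroy the original and rename the copy back. That needs enough
free space for the largest dataset and a short downtime for the
rename, but avoids destroying the pool.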