From: Fred Liu <fred.fliu@gmail.com>
To: Julian Elischer
Cc: illumos-zfs, Discussion list for OpenIndiana, omnios-discuss, developer, zfs-devel@freebsd.org, illumos-developer, freebsd-fs@FreeBSD.org, smartos-discuss@lists.smartos.org, zfs-discuss@list.zfsonlinux.org
Date: Tue, 8 Mar 2016 11:50:01 +0800
Subject: Re: [zfs] an interesting survey -- the zpool with most disks you have ever built

2016-03-07 13:55 GMT+08:00 Julian Elischer:

> On 6/03/2016 9:30 PM, Fred Liu wrote:
>>
>> 2016-03-05 0:01 GMT+08:00 Freddie Cash:
>>
>>> On Mar 4, 2016 2:05 AM, "Fred Liu" wrote:
>>>>
>>>> 2016-03-04 13:47 GMT+08:00 Freddie Cash:
>>>>>
>>>>> Currently, I just use a simple coordinate system. Columns are letters,
>>>>> rows are numbers.
>>>>> Each disk is partitioned using GPT, with the first (and only) partition
>>>>> starting at 1 MB and covering the whole disk, and labelled with the
>>>>> column/row where it is located (disk-a1, disk-g6, disk-p3, etc.).
>>>>
>>>> [Fred]: So you manually pull off all the drives one by one to locate
>>>> them?
>>>
>>> When putting the system together for the first time, I insert each disk
>>> one at a time, wait for it to be detected, partition it, then label it
>>> based on its physical location. Then do the next one. It's just part of
>>> the normal server build process, whether it has 2 drives, 20 drives, or
>>> 200 drives.
>>>
>>> We build all our own servers from off-the-shelf parts; we don't buy
>>> anything pre-built from any of the large OEMs.
>>
>> [Fred]: Gotcha!
>>
>>>>> The pool is created using the GPT labels, so the label shows in
>>>>> "zpool list" output.
>>>>
>>>> [Fred]: What will the output look like?
>>>
>>> From our smaller backup server, with just 24 drive bays:
>>>
>>> $ zpool status storage
>>>   pool: storage
>>>  state: ONLINE
>>> status: Some supported features are not enabled on the pool. The pool
>>>         can still be used, but some features are unavailable.
>>> action: Enable all features using 'zpool upgrade'. Once this is done,
>>>         the pool may no longer be accessible by software that does not
>>>         support the features. See zpool-features(7) for details.
>>>   scan: scrub canceled on Wed Feb 17 12:02:20 2016
>>> config:
>>>
>>>         NAME             STATE     READ WRITE CKSUM
>>>         storage          ONLINE       0     0     0
>>>           raidz2-0       ONLINE       0     0     0
>>>             gpt/disk-a1  ONLINE       0     0     0
>>>             gpt/disk-a2  ONLINE       0     0     0
>>>             gpt/disk-a3  ONLINE       0     0     0
>>>             gpt/disk-a4  ONLINE       0     0     0
>>>             gpt/disk-a5  ONLINE       0     0     0
>>>             gpt/disk-a6  ONLINE       0     0     0
>>>           raidz2-1       ONLINE       0     0     0
>>>             gpt/disk-b1  ONLINE       0     0     0
>>>             gpt/disk-b2  ONLINE       0     0     0
>>>             gpt/disk-b3  ONLINE       0     0     0
>>>             gpt/disk-b4  ONLINE       0     0     0
>>>             gpt/disk-b5  ONLINE       0     0     0
>>>             gpt/disk-b6  ONLINE       0     0     0
>>>           raidz2-2       ONLINE       0     0     0
>>>             gpt/disk-c1  ONLINE       0     0     0
>>>             gpt/disk-c2  ONLINE       0     0     0
>>>             gpt/disk-c3  ONLINE       0     0     0
>>>             gpt/disk-c4  ONLINE       0     0     0
>>>             gpt/disk-c5  ONLINE       0     0     0
>>>             gpt/disk-c6  ONLINE       0     0     0
>>>           raidz2-3       ONLINE       0     0     0
>>>             gpt/disk-d1  ONLINE       0     0     0
>>>             gpt/disk-d2  ONLINE       0     0     0
>>>             gpt/disk-d3  ONLINE       0     0     0
>>>             gpt/disk-d4  ONLINE       0     0     0
>>>             gpt/disk-d5  ONLINE       0     0     0
>>>             gpt/disk-d6  ONLINE       0     0     0
>>>         cache
>>>           gpt/cache0     ONLINE       0     0     0
>>>           gpt/cache1     ONLINE       0     0     0
>>>
>>> errors: No known data errors
>>>
>>> The 90-bay systems look the same, except that the letters go all the way
>>> to p (so disk-p1 through disk-p6). And there's one vdev that uses 3
>>> drives from each chassis (7x 6-disk vdevs use only 42 drives of a 45-bay
>>> chassis, so there are lots of spares when using a single chassis; with
>>> two chassis, there are enough drives to add an extra 6-disk vdev).
>>
>> [Fred]: It looks like the GPT label shown in "zpool status" only works in
>> FreeBSD/FreeNAS. Are you using FreeBSD/FreeNAS? I can't find similar
>> possibilities in Illumos/Linux.
>
> Ah, that's the trick: FreeBSD exports an actual /dev/gpt/{your-label-goes-here}
> device node for each labelled partition it finds. So it's not ZFS doing
> anything special; it's just what FreeBSD calls the partition.

Super cool!

Fred
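[Editor's note: the labelling workflow described in the thread can be sketched roughly as below. The device name (da0), bay position (a1), and pool layout are assumptions for illustration; the gpart/zpool commands need root and real hardware, so they are shown as comments, while the label-grid generation is plain runnable shell.]

```shell
#!/bin/sh
# Per-disk labelling step as described in the thread, on FreeBSD
# (assumed device da0 sitting in bay column a, row 1):
#
#   gpart create -s gpt da0
#   gpart add -t freebsd-zfs -a 1m -l disk-a1 da0
#
# FreeBSD's GEOM label provider then exposes /dev/gpt/disk-a1, and the
# pool is built from labels rather than raw device nodes, e.g.:
#
#   zpool create storage \
#       raidz2 gpt/disk-a1 gpt/disk-a2 gpt/disk-a3 \
#              gpt/disk-a4 gpt/disk-a5 gpt/disk-a6 \
#       raidz2 gpt/disk-b1 gpt/disk-b2 gpt/disk-b3 \
#              gpt/disk-b4 gpt/disk-b5 gpt/disk-b6

# Generating the full coordinate grid of labels (columns a-d, rows 1-6,
# matching the 24-bay example above) is a simple nested loop:
for col in a b c d; do
    for row in 1 2 3 4 5 6; do
        echo "disk-${col}${row}"
    done
done
```

On Linux, a broadly similar effect can be had because udev exposes GPT partition labels under /dev/disk/by-partlabel/, which zpool can also be pointed at.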