Date:      Mon, 7 Mar 2016 13:30:18 +0800
From:      Fred Liu <fred.fliu@gmail.com>
To:        illumos-zfs <zfs@lists.illumos.org>
Cc:        "smartos-discuss@lists.smartos.org" <smartos-discuss@lists.smartos.org>, developer <developer@open-zfs.org>,  illumos-developer <developer@lists.illumos.org>,  omnios-discuss <omnios-discuss@lists.omniti.com>,  Discussion list for OpenIndiana <openindiana-discuss@openindiana.org>,  "zfs-discuss@list.zfsonlinux.org" <zfs-discuss@list.zfsonlinux.org>,  "freebsd-fs@FreeBSD.org" <freebsd-fs@freebsd.org>, "zfs-devel@freebsd.org" <zfs-devel@freebsd.org>
Subject:   Re: [zfs] an interesting survey -- the zpool with most disks you have ever built
Message-ID:  <CALi05Xyy3voKVHTR=bHSG5JszQBW4NC0=XL_C-YTQdwzBPwnag@mail.gmail.com>
In-Reply-To: <CAOjFWZ6YvtpBf2J9F6OTGLh0UfRuBxiY6iF-gNFNAhv=QCB7YQ@mail.gmail.com>
References:  <95563acb-d27b-4d4b-b8f3-afeb87a3d599@me.com> <CACTb9pxJqk__DPN_pDy4xPvd6ETZtbF9y=B8U7RaeGnn0tKAVQ@mail.gmail.com> <CAJjvXiH9Wh+YKngTvv0XG1HtikWggBDwjr_MCb8=Rf276DZO-Q@mail.gmail.com> <56D87784.4090103@broken.net> <A5A6EA4AE9DCC44F8E7FCB4D6317B1D203178F1DD392@SH-MAIL.ISSI.COM> <CAOjFWZ5YcaAx-v5ZqsoFnHFB1jnvstpXpGObcfewMx75WU0TeQ@mail.gmail.com> <A5A6EA4AE9DCC44F8E7FCB4D6317B1D203178F1DD39E@SH-MAIL.ISSI.COM> <CAOjFWZ7E-LTvUy60UTe2Yi2Egw6+brKZx3r70UbtJJ9haNL5Hg@mail.gmail.com> <CALi05Xwc3dKTsyuaSLeVQSptMp537XeLxXf6Pj+15jRtXKXCfA@mail.gmail.com> <CAOjFWZ6YvtpBf2J9F6OTGLh0UfRuBxiY6iF-gNFNAhv=QCB7YQ@mail.gmail.com>

2016-03-05 0:01 GMT+08:00 Freddie Cash <fjwcash@gmail.com>:

> On Mar 4, 2016 2:05 AM, "Fred Liu" <fred.fliu@gmail.com> wrote:
> > 2016-03-04 13:47 GMT+08:00 Freddie Cash <fjwcash@gmail.com>:
> >>
> >> Currently, I just use a simple coordinate system. Columns are letters,
> rows are numbers.
>
> >> Each disk is partitioned using GPT with the first (only) partition
> starting at 1 MB and covering the whole disk, and labelled with the
> column/row where it is located (disk-a1, disk-g6, disk-p3, etc).
> >
> > [Fred]: So you manually pull off all the drives one by one to locate
> them?
>
> When putting the system together for the first time, I insert each disk
> one at a time, wait for it to be detected, partition it, then label it
> based on physical location.  Then do the next one.  It's just part of the
> normal server build process, whether it has 2 drives, 20 drives, or 200
> drives.
>
> We build all our own servers from off-the-shelf parts; we don't buy
> anything pre-built from any of the large OEMs.
>

[Fred]: Gotcha!
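[As a reader's aside: the per-disk step Freddie describes (one GPT partition starting at 1 MB, labelled by column/row) could be sketched for FreeBSD's gpart(8) roughly as below. The device names (da0..da23), the 4x6 grid, and the exact flags are assumptions for illustration; the commands are only printed, not executed, so they can be reviewed first.]

```shell
# Hypothetical sketch: print the gpart commands that would create one
# aligned, labelled partition per disk for a 4-column x 6-row chassis.
# Device names da0..da23 and the 1 MiB alignment are assumptions.
gen_gpart_cmds() {
  i=0
  for col in a b c d; do
    for row in 1 2 3 4 5 6; do
      printf 'gpart create -s gpt da%d\n' "$i"
      printf 'gpart add -t freebsd-zfs -a 1m -l disk-%s%d da%d\n' "$col" "$row" "$i"
      i=$((i + 1))
    done
  done
}
gen_gpart_cmds
```

[The generated labels (disk-a1 .. disk-d6) then show up as /dev/gpt/disk-a1 etc., which is what the pool is built from.]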


> >> The pool is created using the GPT labels, so the label shows in "zpool
> list" output.
> >
> > [Fred]:  What will the output look like?
>
> From our smaller backups server, with just 24 drive bays:
>
> $ zpool status storage
>   pool: storage
>  state: ONLINE
> status: Some supported features are not enabled on the pool. The pool can
>         still be used, but some features are unavailable.
> action: Enable all features using 'zpool upgrade'. Once this is done,
>         the pool may no longer be accessible by software that does not
>         support the features. See zpool-features(7) for details.
>   scan: scrub canceled on Wed Feb 17 12:02:20 2016
> config:
>
>         NAME             STATE     READ WRITE CKSUM
>         storage          ONLINE       0     0     0
>           raidz2-0       ONLINE       0     0     0
>             gpt/disk-a1  ONLINE       0     0     0
>             gpt/disk-a2  ONLINE       0     0     0
>             gpt/disk-a3  ONLINE       0     0     0
>             gpt/disk-a4  ONLINE       0     0     0
>             gpt/disk-a5  ONLINE       0     0     0
>             gpt/disk-a6  ONLINE       0     0     0
>           raidz2-1       ONLINE       0     0     0
>             gpt/disk-b1  ONLINE       0     0     0
>             gpt/disk-b2  ONLINE       0     0     0
>             gpt/disk-b3  ONLINE       0     0     0
>             gpt/disk-b4  ONLINE       0     0     0
>             gpt/disk-b5  ONLINE       0     0     0
>             gpt/disk-b6  ONLINE       0     0     0
>           raidz2-2       ONLINE       0     0     0
>             gpt/disk-c1  ONLINE       0     0     0
>             gpt/disk-c2  ONLINE       0     0     0
>             gpt/disk-c3  ONLINE       0     0     0
>             gpt/disk-c4  ONLINE       0     0     0
>             gpt/disk-c5  ONLINE       0     0     0
>             gpt/disk-c6  ONLINE       0     0     0
>           raidz2-3       ONLINE       0     0     0
>             gpt/disk-d1  ONLINE       0     0     0
>             gpt/disk-d2  ONLINE       0     0     0
>             gpt/disk-d3  ONLINE       0     0     0
>             gpt/disk-d4  ONLINE       0     0     0
>             gpt/disk-d5  ONLINE       0     0     0
>             gpt/disk-d6  ONLINE       0     0     0
>         cache
>           gpt/cache0     ONLINE       0     0     0
>           gpt/cache1     ONLINE       0     0     0
>
> errors: No known data errors
>
> The 90-bay systems look the same, just that the letters go all the way to
> p (so disk-p1 through disk-p6).  And there's one vdev that uses 3 drives
> from each chassis (7x 6-disk vdev only uses 42 drives of the 45-bay
> chassis, so there's lots of spares if using a single chassis; using two
> chassis, there's enough drives to add an extra 6-disk vdev).
>

[Fred]: It looks like showing GPT labels in "zpool status" only works on
FreeBSD/FreeNAS. Are you using FreeBSD/FreeNAS? I can't find a similar
capability in illumos/Linux.
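
[As a reader's aside: on Linux, udev exposes GPT partition names as /dev/disk/by-partlabel/ symlinks, which may give a comparable result, since zpool status displays whatever device paths the pool was created with. The sketch below only generates the member paths for one raidz2 vdev; the sgdisk command in the comment, the pool layout, and all names are assumptions, not a confirmed equivalent of Freddie's setup.]

```shell
# Hypothetical sketch for Linux: give each disk a named GPT partition,
# e.g.  sgdisk -n 1:2048:0 -c 1:disk-a1 /dev/sda   (start at sector 2048
# = 1 MiB on 512-byte sectors), then build the pool from the
# /dev/disk/by-partlabel/ symlinks so the labels show in `zpool status`.
by_partlabel_vdev() {
  col=$1  # chassis column; one 6-disk raidz2 vdev per column (assumed)
  for row in 1 2 3 4 5 6; do
    printf '/dev/disk/by-partlabel/disk-%s%d ' "$col" "$row"
  done
}
# e.g. zpool create storage raidz2 $(by_partlabel_vdev a) raidz2 $(by_partlabel_vdev b)
by_partlabel_vdev a; echo
```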

Thanks,

Fred



