Date:      Sun, 6 Mar 2016 21:55:22 -0800
From:      Julian Elischer <julian@freebsd.org>
To:        Fred Liu <fred.fliu@gmail.com>, illumos-zfs <zfs@lists.illumos.org>
Cc:        Discussion list for OpenIndiana <openindiana-discuss@openindiana.org>, omnios-discuss <omnios-discuss@lists.omniti.com>, developer <developer@open-zfs.org>, "zfs-devel@freebsd.org" <zfs-devel@freebsd.org>, illumos-developer <developer@lists.illumos.org>, "freebsd-fs@FreeBSD.org" <freebsd-fs@freebsd.org>, "smartos-discuss@lists.smartos.org" <smartos-discuss@lists.smartos.org>, "zfs-discuss@list.zfsonlinux.org" <zfs-discuss@list.zfsonlinux.org>
Subject:   Re: [zfs] an interesting survey -- the zpool with most disks you have ever built
Message-ID:  <56DD17CA.90200@freebsd.org>
In-Reply-To: <CALi05Xyy3voKVHTR=bHSG5JszQBW4NC0=XL_C-YTQdwzBPwnag@mail.gmail.com>
References:  <95563acb-d27b-4d4b-b8f3-afeb87a3d599@me.com> <CACTb9pxJqk__DPN_pDy4xPvd6ETZtbF9y=B8U7RaeGnn0tKAVQ@mail.gmail.com> <CAJjvXiH9Wh+YKngTvv0XG1HtikWggBDwjr_MCb8=Rf276DZO-Q@mail.gmail.com> <56D87784.4090103@broken.net> <A5A6EA4AE9DCC44F8E7FCB4D6317B1D203178F1DD392@SH-MAIL.ISSI.COM> <CAOjFWZ5YcaAx-v5ZqsoFnHFB1jnvstpXpGObcfewMx75WU0TeQ@mail.gmail.com> <A5A6EA4AE9DCC44F8E7FCB4D6317B1D203178F1DD39E@SH-MAIL.ISSI.COM> <CAOjFWZ7E-LTvUy60UTe2Yi2Egw6+brKZx3r70UbtJJ9haNL5Hg@mail.gmail.com> <CALi05Xwc3dKTsyuaSLeVQSptMp537XeLxXf6Pj+15jRtXKXCfA@mail.gmail.com> <CAOjFWZ6YvtpBf2J9F6OTGLh0UfRuBxiY6iF-gNFNAhv=QCB7YQ@mail.gmail.com> <CALi05Xyy3voKVHTR=bHSG5JszQBW4NC0=XL_C-YTQdwzBPwnag@mail.gmail.com>

On 6/03/2016 9:30 PM, Fred Liu wrote:
> 2016-03-05 0:01 GMT+08:00 Freddie Cash <fjwcash@gmail.com>:
>
>> On Mar 4, 2016 2:05 AM, "Fred Liu" <fred.fliu@gmail.com> wrote:
>>> 2016-03-04 13:47 GMT+08:00 Freddie Cash <fjwcash@gmail.com>:
>>>> Currently, I just use a simple coordinate system. Columns are letters,
>>>> rows are numbers.
>>>> "smartos-discuss@lists.smartos.org" <smartos-discuss@lists.smartos.org
>>> 、
>> developer <developer@open-zfs.org>、
>>
>> illumos-developer <developer@lists.illumos.org>、
>>
>> omnios-discuss <omnios-discuss@lists.omniti.com>、
>>
>> Discussion list for OpenIndiana <openindiana-discuss@openindiana.org>、
>>
>> illumos-zfs <zfs@lists.illumos.org>、
>>
>> "zfs-discuss@list.zfsonlinux.org" <zfs-discuss@list.zfsonlinux.org>、
>>
>> "freebsd-fs@FreeBSD.org" <freebsd-fs@freebsd.org>、
>>
>> "zfs-devel@freebsd.org" <zfs-devel@freebsd.org>
>>
>>>> Each disk is partitioned using GPT with the first (only) partition
>>>> starting at 1 MB and covering the whole disk, and labelled with the
>>>> column/row where it is located (disk-a1, disk-g6, disk-p3, etc).
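For reference, that partitioning step looks roughly like the following on FreeBSD (a sketch only, not the poster's actual commands; the device name da10 is a placeholder):

    # Create a fresh GPT on the disk, then add one partition aligned to 1 MB
    # that covers the rest of the disk, labelled with its column/row position.
    gpart create -s gpt da10
    gpart add -t freebsd-zfs -a 1M -l disk-a1 da10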
>>> [Fred]: So you manually pull off all the drives one by one to locate them?
>>
>> When putting the system together for the first time, I insert each disk
>> one at a time, wait for it to be detected, partition it, then label it
>> based on physical location.  Then do the next one.  It's just part of the
>> normal server build process, whether it has 2 drives, 20 drives, or 200
>> drives.
>>
>> We build all our own servers from off-the-shelf parts; we don't buy
>> anything pre-built from any of the large OEMs.
>>
> [Fred]: Gotcha!
>
>
>>>> The pool is created using the GPT labels, so the label shows in "zpool
>>>> list" output.
>>> [Fred]: What will the output look like?
>> From our smaller backups server, with just 24 drive bays:
>>
>> $ zpool status storage
>>    pool: storage
>>   state: ONLINE
>> status: Some supported features are not enabled on the pool. The pool can
>>         still be used, but some features are unavailable.
>> action: Enable all features using 'zpool upgrade'. Once this is done, the
>>         pool may no longer be accessible by software that does not support
>>         the features. See zpool-features(7) for details.
>>    scan: scrub canceled on Wed Feb 17 12:02:20 2016
>> config:
>>
>> NAME             STATE     READ WRITE CKSUM
>> storage          ONLINE       0     0     0
>>   raidz2-0       ONLINE       0     0     0
>>     gpt/disk-a1  ONLINE       0     0     0
>>     gpt/disk-a2  ONLINE       0     0     0
>>     gpt/disk-a3  ONLINE       0     0     0
>>     gpt/disk-a4  ONLINE       0     0     0
>>     gpt/disk-a5  ONLINE       0     0     0
>>     gpt/disk-a6  ONLINE       0     0     0
>>   raidz2-1       ONLINE       0     0     0
>>     gpt/disk-b1  ONLINE       0     0     0
>>     gpt/disk-b2  ONLINE       0     0     0
>>     gpt/disk-b3  ONLINE       0     0     0
>>     gpt/disk-b4  ONLINE       0     0     0
>>     gpt/disk-b5  ONLINE       0     0     0
>>     gpt/disk-b6  ONLINE       0     0     0
>>   raidz2-2       ONLINE       0     0     0
>>     gpt/disk-c1  ONLINE       0     0     0
>>     gpt/disk-c2  ONLINE       0     0     0
>>     gpt/disk-c3  ONLINE       0     0     0
>>     gpt/disk-c4  ONLINE       0     0     0
>>     gpt/disk-c5  ONLINE       0     0     0
>>     gpt/disk-c6  ONLINE       0     0     0
>>   raidz2-3       ONLINE       0     0     0
>>     gpt/disk-d1  ONLINE       0     0     0
>>     gpt/disk-d2  ONLINE       0     0     0
>>     gpt/disk-d3  ONLINE       0     0     0
>>     gpt/disk-d4  ONLINE       0     0     0
>>     gpt/disk-d5  ONLINE       0     0     0
>>     gpt/disk-d6  ONLINE       0     0     0
>> cache
>>   gpt/cache0     ONLINE       0     0     0
>>   gpt/cache1     ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> The 90-bay systems look the same, just that the letters go all the way to
>> p (so disk-p1 through disk-p6).  And there's one vdev that uses 3 drives
>> from each chassis (7x 6-disk vdev only uses 42 drives of the 45-bay
>> chassis, so there's lots of spares if using a single chassis; using two
>> chassis, there's enough drives to add an extra 6-disk vdev).
>>
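A pool laid out like the one above would have been created by naming those GPT labels directly, along the lines of the following sketch (only the first two vdevs are written out; this is not the poster's actual command):

    # Build the pool from GPT labels; the remaining raidz2 vdevs
    # (disk-c* and disk-d*) follow the same pattern.
    zpool create storage \
        raidz2 gpt/disk-a1 gpt/disk-a2 gpt/disk-a3 gpt/disk-a4 gpt/disk-a5 gpt/disk-a6 \
        raidz2 gpt/disk-b1 gpt/disk-b2 gpt/disk-b3 gpt/disk-b4 gpt/disk-b5 gpt/disk-b6
    # Attach the cache (L2ARC) devices by label as well.
    zpool add storage cache gpt/cache0 gpt/cache1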
> [Fred]: It looks like the GPT labels shown in "zpool status" only work in
> FreeBSD/FreeNAS. Are you using FreeBSD/FreeNAS? I can't find a similar
> capability in illumos/Linux.

Ah, that's the trick: FreeBSD exports an actual
/dev/gpt/{your-label-goes-here} device node for each labeled partition it
finds. So it's not ZFS doing anything special; it's just what FreeBSD is
calling the partition.
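Concretely, once the partitions are labeled, those names show up as device nodes and in the partition listing, roughly like this (da10 is the same placeholder device as above; the listing is illustrative):

    # The GEOM label class exposes one node per GPT label:
    ls /dev/gpt
    #   cache0  cache1  disk-a1  disk-a2  ...  disk-d6
    # gpart can also print the labels alongside the partition layout:
    gpart show -l da10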
>
> Thanks,
>
> Fred
>



