Date:      Wed, 27 Apr 2016 13:00:50 -0500
From:      Chris Watson <bsdunix44@gmail.com>
To:        Steven Hartland <killing@multiplay.co.uk>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: zfs on nvme: gnop breaks pool, zfs gets stuck
Message-ID:  <B3C1CBE8-FCBC-4053-9B5B-EE9147D7A01E@gmail.com>
In-Reply-To: <5720EFD8.60900@multiplay.co.uk>
References:  <20160427152244.ff36ff74ae64c1f86fdc960a@aei.mpg.de> <20160427141436.GA60370@in-addr.com> <5720EFD8.60900@multiplay.co.uk>

I think for most people the gnop hack is what is documented on the web, which is why people are using it instead of the ashift sysctl. If the sysctl for ashift is not documented in the ZFS section of the handbook, it probably should be.
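
For reference, the sysctl route looks roughly like this (pool and label names
taken from the report below; consider it a sketch, not a tested recipe):

---
sysctl vfs.zfs.min_auto_ashift=12
zpool create flash raidz1 gpt/flash0 gpt/flash1 gpt/flash2
zdb -C flash | grep ashift    # should show ashift: 12 on the vdev
---

Setting vfs.zfs.min_auto_ashift=12 in /etc/sysctl.conf keeps it across reboots.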

Chris

Sent from my iPhone 5

> On Apr 27, 2016, at 11:59 AM, Steven Hartland <killing@multiplay.co.uk> wrote:
>
>
>> On 27/04/2016 15:14, Gary Palmer wrote:
>>> On Wed, Apr 27, 2016 at 03:22:44PM +0200, Gerrit Kühn wrote:
>>> Hello all,
>>>
>>> I have a set of three NVME-ssds on PCIe-converters:
>>>=20
>>> ---
>>> root@storage:~ # nvmecontrol devlist
>>>  nvme0: SAMSUNG MZVPV512HDGL-00000
>>>     nvme0ns1 (488386MB)
>>>  nvme1: SAMSUNG MZVPV512HDGL-00000
>>>     nvme1ns1 (488386MB)
>>>  nvme2: SAMSUNG MZVPV512HDGL-00000
>>>     nvme2ns1 (488386MB)
>>> ---
>>>
>>>
>>> I want to use a raidz1 on these and created 1M-aligned partitions:
>>>=20
>>> ---
>>> root@storage:~ # gpart show
>>> =>        34  1000215149  nvd0  GPT  (477G)
>>>           34        2014        - free -  (1.0M)
>>>         2048  1000212480     1  freebsd-zfs  (477G)
>>>   1000214528         655        - free -  (328K)
>>>
>>> =>        34  1000215149  nvd1  GPT  (477G)
>>>           34        2014        - free -  (1.0M)
>>>         2048  1000212480     1  freebsd-zfs  (477G)
>>>   1000214528         655        - free -  (328K)
>>>
>>> =>        34  1000215149  nvd2  GPT  (477G)
>>>           34        2014        - free -  (1.0M)
>>>         2048  1000212480     1  freebsd-zfs  (477G)
>>>   1000214528         655        - free -  (328K)
>>> ---
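>>>
>>> (For reference, a layout like the above would come from something along
>>> these lines; the flashN labels are inferred from the .nop provider names
>>> further down, so treat this as a sketch:)
>>>
>>> ---
>>> gpart create -s gpt nvd0
>>> gpart add -t freebsd-zfs -a 1m -l flash0 nvd0
>>> # repeated for nvd1/flash1 and nvd2/flash2
>>> ---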
>>>
>>>
>>> After creating a zpool I noticed that it was using ashift=9. I vaguely
>>> remembered that SSDs usually have 4k (or even larger) sectors, so I
>>> destroyed the pool and set up gnop-providers with -S 4k to get ashift=12.
>>> This worked as expected:
>>>
>>> ---
>>>   pool: flash
>>>  state: ONLINE
>>>   scan: none requested
>>> config:
>>>
>>>    NAME                STATE     READ WRITE CKSUM
>>>    flash               ONLINE       0     0     0
>>>      raidz1-0          ONLINE       0     0     0
>>>        gpt/flash0.nop  ONLINE       0     0     0
>>>        gpt/flash1.nop  ONLINE       0     0     0
>>>        gpt/flash2.nop  ONLINE       0     0     0
>>>
>>> errors: No known data errors
>>> ---
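>>>
>>> (Roughly the steps used to get there, with the same names; a sketch, not
>>> a verified transcript:)
>>>
>>> ---
>>> gnop create -S 4096 gpt/flash0 gpt/flash1 gpt/flash2
>>> zpool create flash raidz1 gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop
>>> ---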
>>>
>>>
>>> This pool can be used, exported and imported just fine as far as I can
>>> tell. Then I exported the pool and destroyed the gnop-providers. When
>>> starting with "advanced format" hdds some years ago, this was the way to
>>> make zfs recognize the disks with ashift=12. However, destroying the
>>> gnop-devices appears to have crashed the pool in this case:
>>>
>>> ---
>>> root@storage:~ # zpool import
>>>    pool: flash
>>>      id: 4978839938025863522
>>>   state: ONLINE
>>>  status: One or more devices contains corrupted data.
>>>  action: The pool can be imported using its name or numeric identifier.
>>>    see: http://illumos.org/msg/ZFS-8000-4J
>>>  config:
>>>
>>>    flash                                           ONLINE
>>>      raidz1-0                                      ONLINE
>>>        11456367280316708003                        UNAVAIL  corrupted data
>>>        gptid/55ae71aa-eb84-11e5-9298-0cc47a6c7484  ONLINE
>>>        6761786983139564172                         UNAVAIL  corrupted data
>>> ---
>>>
>>>
>>> How can the pool be online when two of three devices are unavailable? I
>>> tried to import the pool nevertheless, but the zpool command got stuck in
>>> state tx-tx. A "soft" reboot got stuck, too. I had to push the reset button
>>> to get my system back (still with a corrupt pool). I cleared the labels
>>> and re-did everything: the issue is perfectly reproducible.
>>>
>>> Am I doing something utterly wrong? Why does removing the gnop-nodes
>>> tamper with the devices (I think I did exactly this dozens of times on
>>> normal hdds during previous years, and it always worked just fine)?
>>> And finally, why does the zpool import fail without any error message and
>>> require me to reset the system?
>>> The system is 10.2-RELEASE-p9; an update is scheduled for later this week
>>> (just in case it would make sense to try this again with 10.3). Any other
>>> hints are most welcome.
>> Did you destroy the gnop devices with the pool online?  In the procedure
>> I remember you export the pool, destroy the gnop devices, and then
>> reimport the pool.
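>>
>> Something like the following, using the names from this thread:
>>
>> zpool export flash
>> gnop destroy gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop
>> zpool import flash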
>>
>> Also, you only need to do the gnop trick for a single device in the pool
>> for the entire pool's ashift to be changed AFAIK.  There is a sysctl
>> now too:
>>
>> vfs.zfs.min_auto_ashift
>>
>> which lets you manage the ashift on a new pool without having to try
>> the gnop trick
> This applies to each top-level vdev that makes up a pool, so it's not limited to just new pool creation; there should never be a reason to use the gnop hack to set ashift.
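>
> With the sysctl set before a vdev is created, an existing pool can also be grown at ashift=12, e.g. (the extra device names here are purely illustrative):
>
> sysctl vfs.zfs.min_auto_ashift=12
> zpool add flash raidz1 gpt/flash3 gpt/flash4 gpt/flash5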
>
>    Regards
>    Steve
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"


