Subject: Re: zfs on nvme: gnop breaks pool, zfs gets stuck
To: freebsd-fs@freebsd.org
From: Steven Hartland <killing@multiplay.co.uk>
Date: Wed, 27 Apr 2016 17:59:04 +0100

On 27/04/2016 15:14, Gary Palmer wrote:
> On Wed, Apr 27, 2016 at 03:22:44PM +0200, Gerrit Kühn wrote:
>> Hello all,
>>
>> I have a set of three NVMe SSDs on PCIe converters:
>>
>> ---
>> root@storage:~ # nvmecontrol devlist
>>  nvme0: SAMSUNG MZVPV512HDGL-00000
>>     nvme0ns1 (488386MB)
>>  nvme1: SAMSUNG MZVPV512HDGL-00000
>>     nvme1ns1 (488386MB)
>>  nvme2: SAMSUNG MZVPV512HDGL-00000
>>     nvme2ns1 (488386MB)
>> ---
>>
>> I want to use a raidz1 on these and created 1M-aligned partitions:
>>
>> ---
>> root@storage:~ # gpart show
>> =>        34  1000215149  nvd0  GPT  (477G)
>>           34        2014        - free -  (1.0M)
>>         2048  1000212480     1  freebsd-zfs  (477G)
>>   1000214528         655        - free -  (328K)
>>
>> =>        34  1000215149  nvd1  GPT  (477G)
>>           34        2014        - free -  (1.0M)
>>         2048  1000212480     1  freebsd-zfs  (477G)
>>   1000214528         655        - free -  (328K)
>>
>> =>        34  1000215149  nvd2  GPT  (477G)
>>           34        2014        - free -  (1.0M)
>>         2048  1000212480     1  freebsd-zfs  (477G)
>>   1000214528         655        - free -  (328K)
>> ---
>>
>> After creating a zpool I noticed that it was using ashift=9. I vaguely
>> remembered that SSDs usually have 4k (or even larger) sectors, so I
>> destroyed the pool and set up gnop providers with -S 4k to get
>> ashift=12. This worked as expected:
>>
>> ---
>>   pool: flash
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME                STATE     READ WRITE CKSUM
>>         flash               ONLINE       0     0     0
>>           raidz1-0          ONLINE       0     0     0
>>             gpt/flash0.nop  ONLINE       0     0     0
>>             gpt/flash1.nop  ONLINE       0     0     0
>>             gpt/flash2.nop  ONLINE       0     0     0
>>
>> errors: No known data errors
>> ---
>>
>> This pool can be used, exported and imported just fine as far as I can
>> tell. Then I exported the pool and destroyed the gnop providers. When
>> starting with "advanced format" HDDs some years ago, this was the way
>> to make ZFS recognize the disks with ashift=12. However, destroying
>> the gnop devices appears to have crashed the pool in this case:
>>
>> ---
>> root@storage:~ # zpool import
>>    pool: flash
>>      id: 4978839938025863522
>>   state: ONLINE
>>  status: One or more devices contains corrupted data.
>>  action: The pool can be imported using its name or numeric identifier.
>>     see: http://illumos.org/msg/ZFS-8000-4J
>>  config:
>>
>>         flash                                           ONLINE
>>           raidz1-0                                      ONLINE
>>             11456367280316708003                        UNAVAIL  corrupted data
>>             gptid/55ae71aa-eb84-11e5-9298-0cc47a6c7484  ONLINE
>>             6761786983139564172                         UNAVAIL  corrupted data
>> ---
>>
>> How can the pool be online when two of three devices are unavailable?
>> I tried to import the pool nevertheless, but the zpool command got
>> stuck in state tx-tx. A "soft" reboot got stuck, too.
>> I had to push the reset button to get my system back (still with a
>> corrupt pool). I cleared the labels and re-did everything: the issue
>> is perfectly reproducible.
>>
>> Am I doing something utterly wrong? Why does removing the gnop nodes
>> tamper with the devices (I think I did exactly this dozens of times on
>> normal HDDs in previous years, and it always worked just fine)? And
>> finally, why does the zpool import fail without any error message and
>> require me to reset the system?
>>
>> The system is 10.2-RELEASE-p9; an update is scheduled for later this
>> week (just in case it would make sense to try this again with 10.3).
>> Any other hints are most welcome.
>
> Did you destroy the gnop devices with the pool online? In the procedure
> I remember, you export the pool, destroy the gnop devices, and then
> reimport the pool.
>
> Also, you only need to do the gnop trick for a single device in the pool
> for the entire pool's ashift to be changed, AFAIK. There is a sysctl
> now too:
>
> vfs.zfs.min_auto_ashift
>
> which lets you manage the ashift on a new pool without having to try
> the gnop trick.
>
This applies to each top-level vdev that makes up a pool, so it's not
limited to new pool creation; there should never be a reason to use the
gnop hack to set ashift.

Regards
Steve
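
For reference, the gnop-based sequence the thread discusses would look
roughly like the sketch below. It assumes the pool name and GPT labels
from Gerrit's report (flash, gpt/flash0 through gpt/flash2) and follows
the ordering Gary describes, exporting the pool before the gnop providers
are destroyed; given the rest of the thread, it is shown only to
illustrate the procedure, not as a recommendation.

---
# create 4k gnop providers on top of the labelled partitions
# (per Gary's note, one provider per top-level vdev is enough to
# influence that vdev's ashift)
gnop create -S 4096 /dev/gpt/flash0
gnop create -S 4096 /dev/gpt/flash1
gnop create -S 4096 /dev/gpt/flash2

# create the pool on the .nop devices so it comes up with ashift=12
zpool create flash raidz1 gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop

# export first, only then remove the gnop layer, then re-import
zpool export flash
gnop destroy gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop
zpool import flash
---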
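
A minimal sketch of the sysctl approach Steven refers to, assuming the
same pool name and labels: raise vfs.zfs.min_auto_ashift before creating
the pool (or before adding any new top-level vdev), then check the
resulting ashift with zdb.

---
# require at least 4k sectors (ashift >= 12) for any new top-level vdev
sysctl vfs.zfs.min_auto_ashift=12

# make the setting persistent across reboots
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf

# create the pool directly on the GPT labels, no gnop involved
zpool create flash raidz1 gpt/flash0 gpt/flash1 gpt/flash2

# verify: each top-level vdev should report ashift: 12
zdb -C flash | grep ashift
---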