From owner-freebsd-fs@freebsd.org Wed Apr 27 18:00:54 2016
From: Chris Watson
To: Steven Hartland
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs on nvme: gnop breaks pool, zfs gets stuck
Date: Wed, 27 Apr 2016 13:00:50 -0500
In-Reply-To: <5720EFD8.60900@multiplay.co.uk>
References: <20160427152244.ff36ff74ae64c1f86fdc960a@aei.mpg.de>
 <20160427141436.GA60370@in-addr.com> <5720EFD8.60900@multiplay.co.uk>

I think for most people the gnop hack is what is documented on the web,
which is why people are using it rather than the ashift sysctl. If the
sysctl for ashift is not documented in the ZFS section of the handbook,
it probably should be.

Chris

Sent from my iPhone 5
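For reference, the sysctl in question is vfs.zfs.min_auto_ashift, named
further down in this thread. A minimal sketch of using it in place of the
gnop step, assuming a fresh raidz1 pool built on the gpt labels from the
report quoted below:

---
# floor the auto-detected ashift at 12 (4 KiB) for vdevs created from now on
sysctl vfs.zfs.min_auto_ashift=12
zpool create flash raidz1 gpt/flash0 gpt/flash1 gpt/flash2
---

The setting only affects vdevs created after it is set; to have it survive
a reboot it can be added to /etc/sysctl.conf.
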
> On Apr 27, 2016, at 11:59 AM, Steven Hartland wrote:
> 
> 
>> On 27/04/2016 15:14, Gary Palmer wrote:
>>> On Wed, Apr 27, 2016 at 03:22:44PM +0200, Gerrit Kühn wrote:
>>> Hello all,
>>> 
>>> I have a set of three NVMe SSDs on PCIe converters:
>>> 
>>> ---
>>> root@storage:~ # nvmecontrol devlist
>>>  nvme0: SAMSUNG MZVPV512HDGL-00000
>>>     nvme0ns1 (488386MB)
>>>  nvme1: SAMSUNG MZVPV512HDGL-00000
>>>     nvme1ns1 (488386MB)
>>>  nvme2: SAMSUNG MZVPV512HDGL-00000
>>>     nvme2ns1 (488386MB)
>>> ---
>>> 
>>> I want to use a raidz1 on these and created 1M-aligned partitions:
>>> 
>>> ---
>>> root@storage:~ # gpart show
>>> =>          34  1000215149  nvd0  GPT  (477G)
>>>             34        2014        - free -  (1.0M)
>>>           2048  1000212480     1  freebsd-zfs  (477G)
>>>     1000214528         655        - free -  (328K)
>>> 
>>> =>          34  1000215149  nvd1  GPT  (477G)
>>>             34        2014        - free -  (1.0M)
>>>           2048  1000212480     1  freebsd-zfs  (477G)
>>>     1000214528         655        - free -  (328K)
>>> 
>>> =>          34  1000215149  nvd2  GPT  (477G)
>>>             34        2014        - free -  (1.0M)
>>>           2048  1000212480     1  freebsd-zfs  (477G)
>>>     1000214528         655        - free -  (328K)
>>> ---
>>> 
>>> After creating a zpool I noticed that it was using ashift=9. I vaguely
>>> remembered that SSDs usually have 4k (or even larger) sectors, so I
>>> destroyed the pool and set up gnop providers with -S 4k to get ashift=12.
>>> This worked as expected:
>>> 
>>> ---
>>>   pool: flash
>>>  state: ONLINE
>>>   scan: none requested
>>> config:
>>> 
>>>     NAME                STATE     READ WRITE CKSUM
>>>     flash               ONLINE       0     0     0
>>>       raidz1-0          ONLINE       0     0     0
>>>         gpt/flash0.nop  ONLINE       0     0     0
>>>         gpt/flash1.nop  ONLINE       0     0     0
>>>         gpt/flash2.nop  ONLINE       0     0     0
>>> 
>>> errors: No known data errors
>>> ---
>>> 
>>> This pool can be used, exported and imported just fine as far as I can
>>> tell. Then I exported the pool and destroyed the gnop providers. When
>>> starting with "advanced format" HDDs some years ago, this was the way to
>>> make zfs recognize the disks with ashift=12. However, destroying the
>>> gnop devices appears to have crashed the pool in this case:
>>> 
>>> ---
>>> root@storage:~ # zpool import
>>>    pool: flash
>>>      id: 4978839938025863522
>>>   state: ONLINE
>>>  status: One or more devices contains corrupted data.
>>>  action: The pool can be imported using its name or numeric identifier.
>>>     see: http://illumos.org/msg/ZFS-8000-4J
>>>  config:
>>> 
>>>     flash                                           ONLINE
>>>       raidz1-0                                      ONLINE
>>>         11456367280316708003                        UNAVAIL  corrupted data
>>>         gptid/55ae71aa-eb84-11e5-9298-0cc47a6c7484  ONLINE
>>>         6761786983139564172                         UNAVAIL  corrupted data
>>> ---
>>> 
>>> How can the pool be online when two of three devices are unavailable? I
>>> tried to import the pool nevertheless, but the zpool command got stuck in
>>> state tx-tx. A "soft" reboot got stuck, too. I had to push the reset button
>>> to get my system back (still with a corrupt pool). I cleared the labels
>>> and re-did everything: the issue is perfectly reproducible.
>>> 
>>> Am I doing something utterly wrong? Why does removing the gnop nodes
>>> tamper with the devices (I think I did exactly this dozens of times on
>>> normal HDDs in previous years, and it always worked just fine)? And
>>> finally, why does the zpool import fail without any error message and
>>> require me to reset the system?
>>> 
>>> The system is 10.2-RELEASE-p9, an update is scheduled for later this week
>>> (just in case it would make sense to try this again with 10.3). Any other
>>> hints are most welcome.
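For reference, the gnop-based procedure described above boils down to
something like this (a sketch reconstructed from the description, not
necessarily the exact commands that were used):

---
# force 4k sectors via gnop so the new pool gets ashift=12
gnop create -S 4096 gpt/flash0 gpt/flash1 gpt/flash2
zpool create flash raidz1 gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop

# the classic teardown: export, drop the nop providers, re-import
zpool export flash
gnop destroy gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop
zpool import flash
---

In this report it is the final import, after the gnop providers are gone,
that shows two of the three devices as corrupted and then hangs.
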
>> Did you destroy the gnop devices with the pool online? In the procedure
>> I remember, you export the pool, destroy the gnop devices, and then
>> reimport the pool.
>> 
>> Also, you only need to do the gnop trick for a single device in the pool
>> for the entire pool's ashift to be changed, AFAIK. There is a sysctl
>> now too,
>> 
>> vfs.zfs.min_auto_ashift
>> 
>> which lets you manage the ashift on a new pool without having to try
>> the gnop trick.
> This applies to each top-level vdev that makes up a pool, so it is not
> limited to just new pool creation; there should never be a reason to use
> the gnop hack to set ashift.
> 
> Regards
> Steve
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
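
Once a pool (or an additional top-level vdev) has been created with the
sysctl set, the ashift that actually took effect can be checked with zdb;
a rough sketch, which on FreeBSD may need -U /boot/zfs/zpool.cache if zdb
does not find the pool configuration at its default cache file location:

---
# dump the cached pool config and pick out the per-vdev ashift values
zdb -C flash | grep ashift
---

For a pool that is currently exported, zdb's -e option, which reads the
configuration from the device labels instead of the cache file, should be
usable instead.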