Date: Thu, 28 Apr 2016 05:52:48 +0000
From: Karli Sjöberg <karli.sjoberg@slu.se>
To: "gerrit.kuehn@aei.mpg.de" <gerrit.kuehn@aei.mpg.de>, "gpalmer@freebsd.org" <gpalmer@freebsd.org>
Cc: "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject: Re: zfs on nvme: gnop breaks pool, zfs gets stuck
Message-ID: <1461822768.16231.6.camel@slu.se>
In-Reply-To: <20160428074846.a28da3113294f1edc6ed9ce6@aei.mpg.de>
References: <20160427152244.ff36ff74ae64c1f86fdc960a@aei.mpg.de> <20160427141436.GA60370@in-addr.com> <20160428074846.a28da3113294f1edc6ed9ce6@aei.mpg.de>
On Thu, 2016-04-28 at 07:48 +0200, Gerrit Kühn wrote:
> On Wed, 27 Apr 2016 15:14:36 +0100 Gary Palmer <gpalmer@freebsd.org> wrote
> about Re: zfs on nvme: gnop breaks pool, zfs gets stuck:
> 
> GP> vfs.zfs.min_auto_ashift
> GP> 
> GP> which lets you manage the ashift on a new pool without having to try
> GP> the gnop trick
> 
> I just tried this, and it appears to work fine:
> 
> ---
> root@storage:~ # sysctl vfs.zfs.min_auto_ashift
> vfs.zfs.min_auto_ashift: 12
> 
> root@storage:~ # zpool status
>   pool: data
>  state: ONLINE
>   scan: none requested
> config:
> 
>         NAME            STATE     READ WRITE CKSUM
>         data            ONLINE       0     0     0
>           raidz2-0      ONLINE       0     0     0
>             gpt/disk0   ONLINE       0     0     0
>             gpt/disk1   ONLINE       0     0     0
>             gpt/disk2   ONLINE       0     0     0
>             gpt/disk3   ONLINE       0     0     0
>             gpt/disk4   ONLINE       0     0     0
>             gpt/disk5   ONLINE       0     0     0
>             gpt/disk6   ONLINE       0     0     0
>             gpt/disk7   ONLINE       0     0     0
>             gpt/disk8   ONLINE       0     0     0
>             gpt/disk9   ONLINE       0     0     0
>             gpt/disk10  ONLINE       0     0     0
>             gpt/disk11  ONLINE       0     0     0
> 
> errors: No known data errors
> 
>   pool: flash
>  state: ONLINE
>   scan: none requested
> config:
> 
>         NAME            STATE     READ WRITE CKSUM
>         flash           ONLINE       0     0     0
>           raidz1-0      ONLINE       0     0     0
>             gpt/flash0  ONLINE       0     0     0
>             gpt/flash1  ONLINE       0     0     0
>             gpt/flash2  ONLINE       0     0     0
> 
> errors: No known data errors
> 
> root@storage:~ # zdb | grep ashift
>             ashift: 12
>             ashift: 12
> 
> root@storage:~ # zpool list
> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> data     65T  1.88M  65.0T         -     0%     0%  1.00x  ONLINE  -
> flash  1.39T   800K  1.39T         -     0%     0%  1.00x  ONLINE  -
> ---
> 
> I still wonder why the gnop workaround went so terribly wrong.

Again, because you need to tell zfs where the providers are:

# zpool import -d /dev/gpt flash

/K

> Anyway, thanks again for pointing out this new sysctl to me!
> 
> And for the record: this is what I get with a simple linear write test:
> 
> ---
> root@storage:~ # dd if=/dev/zero of=/flash/Z bs=1024k count=10000
> 10000+0 records in
> 10000+0 records out
> 10485760000 bytes transferred in 3.912829 secs (2679840997 bytes/sec)
> ---
> 
> I guess I won't complain...
> 
> cu
>   Gerrit
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
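
For reference, a minimal sketch of the two approaches discussed in this thread,
assuming GPT labels gpt/flash0 through gpt/flash2 as above (the pool name and
label names are only illustrative, not the poster's exact setup):

---
# current approach: raise the minimum ashift, then create the pool directly
sysctl vfs.zfs.min_auto_ashift=12
zpool create flash raidz1 gpt/flash0 gpt/flash1 gpt/flash2

# older gnop workaround: build the pool on 4K-sector .nop providers, then
# export, destroy the .nop devices, and re-import from /dev/gpt so zfs can
# find the real providers again
gnop create -S 4096 gpt/flash0 gpt/flash1 gpt/flash2
zpool create flash raidz1 gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop
zpool export flash
gnop destroy gpt/flash0.nop gpt/flash1.nop gpt/flash2.nop
zpool import -d /dev/gpt flash
---

Either way, "zdb | grep ashift" should then report 12 for the new vdevs.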
