Date: Tue, 6 Oct 2015 18:42:47 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Sean Kelly <smkelly@smkelly.org>
Cc: FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>, Jim Harris <jim.harris@gmail.com>
Subject: Re: Dell NVMe issues
Message-ID: <20151006154246.GC6469@zxy.spb.ru>
In-Reply-To: <27228FE7-5FF9-4F58-9E23-42A66806C374@smkelly.org>
References: <BC5F191D-FEB2-4ADC-9D6B-240C80B2301C@smkelly.org> <20151006152955.GA16596@zxy.spb.ru> <27228FE7-5FF9-4F58-9E23-42A66806C374@smkelly.org>
On Tue, Oct 06, 2015 at 10:35:57AM -0500, Sean Kelly wrote:
>
> > On Oct 6, 2015, at 10:29 AM, Slawa Olhovchenkov <slw@zxy.spb.ru> wrote:
> >
> > On Tue, Oct 06, 2015 at 10:18:11AM -0500, Sean Kelly wrote:
> >
> >> Back in May, I posted about issues I was having with a Dell PE R630 with 4x800GB NVMe SSDs. I would get kernel panics due to the inability to assign all the interrupts because of https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=199321. Jim Harris helped fix this issue, so I bought several more of these servers, including ones with 4x1.6TB drives...
> >>
> >> While the new servers with 4x800GB drives still work, the ones with 4x1.6TB drives do not. When I do a
> >>   zpool create tank mirror nvd0 nvd1 mirror nvd2 nvd3
> >> the command never returns and the kernel logs:
> >>   nvme0: resetting controller
> >>   nvme0: controller ready did not become 0 within 2000 ms
> >>
> >> I've tried several different things trying to understand where the actual problem is.
> >>   WORKS: dd if=/dev/nvd0 of=/dev/null bs=1m
> >>   WORKS: dd if=/dev/zero of=/dev/nvd0 bs=1m
> >>   WORKS: newfs /dev/nvd0
> >>   FAILS: zpool create tank mirror nvd[01]
> >>   FAILS: gpart add -t freebsd-zfs nvd[01] && zpool create tank mirror nvd[01]p1
> >>   FAILS: gpart add -t freebsd-zfs -s 1400g nvd[01] && zpool create tank nvd[01]p1
> >>   WORKS: gpart add -t freebsd-zfs -s 800g nvd[01] && zpool create tank nvd[01]p1
> >>
> >> NOTE: The above commands are more about getting the point across, not validity. I wiped the disk clean between gpart attempts and used GPT.
> >
> > Just for purity of the experiment: did you try zpool on the raw disk, without GPT? I.e. zpool create tank mirror nvd0 nvd1
>
> Yes, that was actually what I tried first. I headed down the path of GPT because it allowed me a way to restrict how much of the disk zpool touched. zpool on the bare NVMe disks also triggers the issue.

Can you snoop the disk I/O operations with DTrace at the time of the zpool create?
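Such a trace could look something like the sketch below, which watches every block I/O request issued to the nvd devices while `zpool create` runs in another terminal. This is only an illustration of the idea, assuming the stock FreeBSD `io` provider; the `bufinfo_t`/`devinfo_t` field names come from the translators in /usr/lib/dtrace/io.d and should be checked against the target system before use:

```sh
# Run as root; press Ctrl-C to stop.
# Prints device, direction, block number and size for each bio sent to nvdN,
# which should show the last requests issued before the controller reset.
dtrace -n '
io:::start
/args[1]->dev_name == "nvd"/
{
    printf("%s%d %s blkno=%d bcount=%d",
        args[1]->dev_name, args[1]->dev_minor,
        args[0]->b_flags & B_READ ? "read" : "write",
        args[0]->b_blkno, args[0]->b_bcount);
}'
```

If the hang is caused by a particular request (for example one past a size or alignment boundary that the 1.6TB firmware mishandles), the final lines printed before the "resetting controller" message would point at it.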