From owner-freebsd-bugs@freebsd.org Sat Jul 22 22:24:38 2017
From: bugzilla-noreply@freebsd.org
To: freebsd-bugs@FreeBSD.org
Subject: [Bug 211713] NVME controller failure: resetting (Samsung SM961 SSD Drives)
Date: Sat, 22 Jul 2017 22:24:36 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211713

--- Comment #37 from stb@lassitu.de ---

[root@foo ~]# diskinfo -t /dev/nvd0
/dev/nvd0
        512             # sectorsize
        128035676160    # mediasize in bytes (119G)
        250069680       # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        S347NY0HB01730  # Disk ident.

Seek times:
        Full stroke:      250 iter in   0.014551 sec =    0.058 msec
        Half stroke:      250 iter in   0.015022 sec =    0.060 msec
        Quarter stroke:   500 iter in   0.029067 sec =    0.058 msec
        Short forward:    400 iter in   0.015134 sec =    0.038 msec
        Short backward:   400 iter in   0.015675 sec =    0.039 msec
        Seq outer:       2048 iter in   0.063374 sec =    0.031 msec
        Seq inner:       2048 iter in   0.057973 sec =    0.028 msec

Transfer rates:
        outside:       102400 kbytes in   0.094174 sec =  1087349 kbytes/sec
        middle:        102400 kbytes in   0.089065 sec =  1149722 kbytes/sec
        inside:        102400 kbytes in   0.089141 sec =  1148742 kbytes/sec

I think I should be getting 2.2 GB/s; a single sequential reader only gets about half that.
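To rule out GEOM/nvd overhead, one could also drive the controller directly: nvmecontrol(8) has a built-in perftest mode that issues I/O from multiple kernel threads straight at the namespace. A sketch only; the namespace name (nvme0ns1) and the thread count / I/O size are guesses for this box, so check nvmecontrol(8) on your release for the exact flags:

# Hypothetical, not part of the test run above: 32 reader threads,
# 128 kB I/Os, 30 seconds, against the raw namespace (bypasses nvd).
nvmecontrol perftest -n 32 -o read -s 131072 -t 30 nvme0ns1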
With 4 concurrent dd's, gstat shows:

[root@foo ~]# gstat -I60s -f '^....$'
dT: 60.002s  w: 60.000s  filter: ^....$
 L(q)  ops/s    r/s    kBps    ms/r    w/s   kBps   ms/w   %busy Name
    4  13578  13578  1737975    0.3      0      0    0.0   100.0| nvd0

[root@foo ~]# for i in 0 1 2 3; do dd if=/dev/nvd0 of=/dev/null bs=1m count=100k & done; wait; echo 'done'
[1] 41520
[2] 44578
[3] 46696
[4] 47833
102400+0 records in
102400+0 records out
107374182400 bytes transferred in 192.262522 secs (558476927 bytes/sec)
[1]   Done                    dd if=/dev/nvd0 of=/dev/null bs=1m count=100k
102400+0 records in
102400+0 records out
107374182400 bytes transferred in 241.421031 secs (444759026 bytes/sec)
[2]   Done                    dd if=/dev/nvd0 of=/dev/null bs=1m count=100k
102400+0 records in
102400+0 records out
107374182400 bytes transferred in 241.552144 secs (444517613 bytes/sec)
[3]-  Done                    dd if=/dev/nvd0 of=/dev/null bs=1m count=100k
102400+0 records in
102400+0 records out
107374182400 bytes transferred in 241.559861 secs (444503412 bytes/sec)
[4]+  Done                    dd if=/dev/nvd0 of=/dev/null bs=1m count=100k
done

With four readers the drive sits at 100% busy and gstat shows about 1.74 GB/s aggregate, so I'm guessing the penalty is not too big. The 128 GB model has a significantly lower write speed than the 256 GB and 512 GB models (around 800 MB/s, I believe), so I didn't test writes.
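If anyone wants a cleaner parallel-read number: all four dd's above start at LBA 0, so a variant that gives each reader its own disjoint quarter of the device might be fairer. A sketch, assuming the ~119 GB drive shown in diskinfo above; the slice sizes are arithmetic from that, nothing here was run:

#!/bin/sh
# Sketch only, not from the run above: four readers, each streaming a
# private 25 GB slice.  skip= is in units of bs (1 MB here), so reader
# i starts at i * 25 GB; 4 x 25 GB stays inside the 119 GB device.
DEV=/dev/nvd0
for i in 0 1 2 3; do
        dd if=$DEV of=/dev/null bs=1m count=25k skip=$((i * 25 * 1024)) &
done
wait
echo 'done'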