From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 235559] 12.0-STABLE panics on mps drive problem (regression from 11.2 and double-regression from 11.1)
Date: Wed, 06 Feb 2019 18:27:43 +0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235559

Bug ID: 235559
Summary: 12.0-STABLE panics on mps drive problem (regression from 11.2 and double-regression from 11.1)
Product: Base System
Version: 12.0-STABLE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Some People
Priority: ---
Component: kern
Assignee: bugs@FreeBSD.org
Reporter: karl@denninger.net

Created attachment 201796
--> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=201796&action=edit
Core from latest kernel panic

On 11.1, this system was completely stable.

I upgraded to 11.2 and started getting CAM timeouts / retries, which I started a thread on at
https://lists.freebsd.org/pipermail/freebsd-stable/2019-February/090520.html

Note that the card firmware is 19.00.00.00; running 20.00.07.00 (the latest available), instead of CAM problems with individual drives I get controller resets, which are *far* worse as the impact is not local. In no case, however, has data been corrupted -- ZFS is happy with the data and shows no pack errors of any sort, nor do the disks themselves using smartctl. The retries are successful.

The configuration is an LSI 8-port HBA with a Lenovo 24-port expander attached to one of the LSI connectors; the other has the boot drives on it, as the system and card firmware cannot boot from the expander. This configuration has been stable for the last several years and up through 11.1-STABLE was flawless. The drives themselves, the backplanes to which they attach, the power supply, the HBA, the SAS expander, and the cables have all been swapped out with spares here without any change in behavior. The motherboard itself is a Xeon board with ECC, and no RAM errors are being logged. (It's thus reasonable to assume this isn't a hardware problem....)

The stall and retry itself looks an awful lot like a queued command being missed or an interrupt being lost, both under very heavy load. This typically occurs only when the drives in question are slammed at or near 100% utilization for an extended period of time (e.g. during a scrub or resilver). I have seen it on both HGST and Seagate drives of differing capacities, models, and firmware revision numbers; it does not appear to be related to the disk model or firmware itself.

In an attempt to see if this was related to something in 11.2, I rolled the machine forward to 12.0-STABLE.
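For anyone trying to correlate the stalls with load, the console noise can be tallied per device along these lines. This is only a sketch: the log lines below are illustrative placeholders of the general shape CAM emits, not verbatim output from this system, and the device names/CDBs are made up.

```shell
# Illustrative only: sample CAM console lines of the general shape seen
# during the stalls (da3/da5 and the CDB bytes are placeholders).
cat > /tmp/cam-sample.log <<'EOF'
(da3:mps0:0:11:0): READ(10). CDB: 28 00 10 00 00 00 00 01 00 00
(da3:mps0:0:11:0): CAM status: Command timeout
(da3:mps0:0:11:0): Retrying command
(da5:mps0:0:13:0): CAM status: Command timeout
(da5:mps0:0:13:0): Retrying command
EOF

# Count timeouts per device to see whether one drive or the whole
# bus/expander path is implicated.
grep 'Command timeout' /tmp/cam-sample.log | cut -d: -f1 | sort | uniq -c
```

Run against the real /var/log/messages, a skew toward a single daN would point at a drive path; an even spread would point at the HBA or expander.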
On 12.0-STABLE, r343809, this same condition, rather than producing console logs and a successful retry, instead results in a kernel panic in the driver. The disk I/O in process at the time is a ZFS scrub, and the drive in question is pure data -- it has no executables on it, and in fact the pool has no mounted filesystems at the time of the panic (it's a backup pool that is imported to serve as a destination for zfs sends used as a means of backup.)

I have ordered a pair of HBA 16i cards in order to get the expander out of the case, in the hope that will stop the detach events, although I am completely lost as to why 11.2 and 12.0 will not run with a configuration that was entirely stable over the last several releases up through 11.1, with uptimes measured in months; until 11.2 I had never seen even a single panic out of the disk subsystem on this configuration.

Note that if you have all disks attached to the mps driver you can't take a kernel core dump when the panic happens; any attempt to do so results in a double panic out of the driver. I have temporarily attached a drive to the onboard SATA ports and set it as dumpdev so as to be able to get the core file.

The panic itself bodes poorly for the impact of real disk problems, where a drive goes offline while attached to the mps driver in 12.0; hence this bug report, in an attempt to figure out this regression.

--
You are receiving this mail because:
You are the assignee for the bug.
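P.S. For reference, the dump-device workaround mentioned above amounts to something like the following. This is a sketch only: /dev/ada0s1b is a placeholder for whatever swap/dump slice the temporarily attached SATA drive actually provides on a given system.

```shell
# Point crash dumps at a drive on the onboard SATA controller instead of
# one behind mps(4), so a panic in the mps driver can still be captured.
# /dev/ada0s1b is a placeholder device name.
sysrc dumpdev="/dev/ada0s1b"   # persist the setting in /etc/rc.conf
dumpon /dev/ada0s1b            # activate it for the running system
```

After the next panic, savecore(8) retrieves the dump from that device on boot, which is how the attached core was obtained.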