From owner-freebsd-current@freebsd.org Tue Mar 20 08:21:02 2018
Subject: Re: ZFS i/o error in recent 12.0
From: Toomas Soome <tsoome@me.com>
In-reply-to: <20180320085028.0b15ff40@mwoffice.virtualtec.office>
Date: Tue, 20 Mar 2018 10:21:02 +0200
Cc: freebsd-current@freebsd.org
Message-id: <6680868D-F08A-4AF4-B68D-7E20ADBA67D4@me.com>
References: <201803192300.w2JN04fx007127@kx.openedu.org> <20180320085028.0b15ff40@mwoffice.virtualtec.office>
To: Markus Wild
List-Id: Discussions about the use of FreeBSD-current

> On 20 Mar 2018, at 09:50, Markus Wild wrote:
>
> Hi there,
>
>> I've encountered a sudden death on a ZFS full-volume
>> machine (r330434) about 10 days after installation [1]:
>>
>> ZFS: i/o error - all block copies unavailable
>> ZFS: can't read MOS of pool zroot
>> gptzfsboot: failed to mount default pool zroot
>>
>
>> 268847104 30978715648 4 freebsd-zfs (14T)
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> I faced the exact same issue on an HP MicroServer G8 with 8TB disks and a 16TB zpool on FreeBSD 11 about a year ago.
> My conclusion was that over time (and after updating the kernel), the blocks for that kernel file were reallocated to a
> later spot on the disks, and however the loader fetches those blocks, it now failed to do so (perhaps a 2/4TB
> limit/bug in the BIOS of that server? Unfortunately, there was no UEFI support for it; I don't know whether that
> changed in the meantime).
> The pool was always importable fine from the USB stick; the problem was only with the boot
> loader. I worked around the problem by stealing space from the swap partitions on two disks to build a "zboot" pool
> containing just the /boot directory, having the boot loader load the kernel from there, and then still mounting the
> real root pool to run the system off, using loader variables in loader.conf of the boot pool. It's a hack, but it has
> been working fine since (the server is being used as a backup repository). This is what I have in the "zboot"
> boot/loader.conf:
>
> # zfs boot kludge due to buggy bios
> vfs.root.mountfrom="zfs:zroot/ROOT/fbsd11"
>
> If you're facing the same problem, you might give this a shot? You seem to have plenty of swap to cannibalize as well ;)

please check with "lsdev -v" from the loader OK prompt - do the reported disk/partition sizes make sense? Another thing: even if you do update to a current build, you want to make sure your installed boot blocks are updated as well - otherwise you will have the new binary in the /boot directory, but it is not installed in the boot block area…

rgds,
toomas
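For reference, reinstalling the boot blocks after an upgrade is done with gpart(8). A minimal sketch, assuming a GPT-partitioned disk named ada0 whose freebsd-boot partition is at index 1 - the disk name and index are assumptions here, so confirm your own layout with "gpart show" first and repeat for every disk you boot from:

```shell
# Confirm the partition layout; look for the freebsd-boot partition index.
gpart show ada0

# Write the protective MBR and the GPT ZFS boot code into the
# freebsd-boot partition (index 1 in this example).
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
```

On a mirrored pool this must be done on each disk, so the machine can still boot if one member fails.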