From owner-freebsd-current@freebsd.org Thu Jun 21 07:48:45 2018
Subject: Re: ZFS: I/O error - blocks larger than 16777216 are not supported
From: Toomas Soome <tsoome@me.com>
In-reply-to: <201806210600.w5L60mYn079435@kx.openedu.org>
Date: Thu, 21 Jun 2018 10:48:28 +0300
Cc: Allan Jude, freebsd-current@freebsd.org
Message-id: <1CDD5AFE-F115-406C-AB92-9DC58B57E1D5@me.com>
References: <201806210136.w5L1a5Nv074194@kx.openedu.org>
 <21493592-4eb2-59c5-1b0d-e1d08217a96b@freebsd.org>
 <201806210600.w5L60mYn079435@kx.openedu.org>
To: KIRIYAMA Kazuhiko

> On 21 Jun 2018, at 09:00, KIRIYAMA Kazuhiko wrote:
>
> At Wed, 20 Jun 2018 23:34:48 -0400,
> Allan Jude wrote:
>>
>> On 2018-06-20 21:36, KIRIYAMA Kazuhiko wrote:
>>> Hi all,
>>>
>>> I reported the ZFS boot failure problem [1] earlier, and found
>>> that the issue comes from the RAID configuration [2]. So I
>>> rebuilt with RAID5 and re-installed 12.0-CURRENT (r333982).
>>> But it failed to boot with:
>>>
>>> ZFS: i/o error - all block copies unavailable
>>> ZFS: can't read MOS of pool zroot
>>> gptzfsboot: failed to mount default pool zroot
>>>
>>> FreeBSD/x86 boot
>>> ZFS: I/O error - blocks larger than 16777216 are not supported
>>> ZFS: can't find dataset u
>>> Default: zroot/<0x0>:
>>>
>>> In this case the reason is "blocks larger than 16777216 are
>>> not supported", and I guess this means datasets that have a
>>> recordsize greater than 8GB are NOT supported by the
>>> FreeBSD boot loader (zpool-features(7)). Is that true?
>>>
>>> My zpool features are as follows:
>>>
>>> # kldload zfs
>>> # zpool import
>>>    pool: zroot
>>>      id: 13407092850382881815
>>>   state: ONLINE
>>>  status: The pool was last accessed by another system.
>>>  action: The pool can be imported using its name or numeric identifier and
>>>          the '-f' flag.
>>>     see: http://illumos.org/msg/ZFS-8000-EY
>>>  config:
>>>
>>>          zroot        ONLINE
>>>            mfid0p3    ONLINE
>>> # zpool import -fR /mnt zroot
>>> # zpool list
>>> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>>> zroot  19.9T   129G  19.7T         -     0%     0%  1.00x  ONLINE  /mnt
>>> # zpool get all zroot
>>> NAME   PROPERTY                                   VALUE                 SOURCE
>>> zroot  size                                       19.9T                 -
>>> zroot  capacity                                   0%                    -
>>> zroot  altroot                                    /mnt                  local
>>> zroot  health                                     ONLINE                -
>>> zroot  guid                                       13407092850382881815  default
>>> zroot  version                                    -                     default
>>> zroot  bootfs                                     zroot/ROOT/default    local
>>> zroot  delegation                                 on                    default
>>> zroot  autoreplace                                off                   default
>>> zroot  cachefile                                  none                  local
>>> zroot  failmode                                   wait                  default
>>> zroot  listsnapshots                              off                   default
>>> zroot  autoexpand                                 off                   default
>>> zroot  dedupditto                                 0                     default
>>> zroot  dedupratio                                 1.00x                 -
>>> zroot  free                                       19.7T                 -
>>> zroot  allocated                                  129G                  -
>>> zroot  readonly                                   off                   -
>>> zroot  comment                                    -                     default
>>> zroot  expandsize                                 -                     -
>>> zroot  freeing                                    0                     default
>>> zroot  fragmentation                              0%                    -
>>> zroot  leaked                                     0                     default
>>> zroot  feature@async_destroy                      enabled               local
>>> zroot  feature@empty_bpobj                        active                local
>>> zroot  feature@lz4_compress                       active                local
>>> zroot  feature@multi_vdev_crash_dump              enabled               local
>>> zroot  feature@spacemap_histogram                 active                local
>>> zroot  feature@enabled_txg                        active                local
>>> zroot  feature@hole_birth                         active                local
>>> zroot  feature@extensible_dataset                 enabled               local
>>> zroot  feature@embedded_data                      active                local
>>> zroot  feature@bookmarks                          enabled               local
>>> zroot  feature@filesystem_limits                  enabled               local
>>> zroot  feature@large_blocks                       enabled               local
>>> zroot  feature@sha512                             enabled               local
>>> zroot  feature@skein                              enabled               local
>>> zroot  unsupported@com.delphix:device_removal     inactive              local
>>> zroot  unsupported@com.delphix:obsolete_counts    inactive              local
>>> zroot  unsupported@com.delphix:zpool_checkpoint   inactive              local
>>> #
>>>
>>> Regards
>>>
>>> [1] https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068886.html
>>> [2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=151910
>>>
>>> ---
>>> KIRIYAMA Kazuhiko
>>>
>>
>> I am guessing it means something is corrupt, as 16MB is the maximum size
>> of a record in ZFS. Also, the 'large_blocks' feature is 'enabled', not
>> 'active', so this suggests you do not have any records larger than 128KB
>> on your pool.
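(You can confirm that directly: the feature state and the configured
record sizes are both queryable. A minimal check - assuming the pool is
still imported under the altroot as in your transcript above:

   # zpool get feature@large_blocks zroot
   # zfs get -r -t filesystem recordsize zroot

"enabled" means the feature is available but nothing has ever used it;
it only flips to "active" once a record larger than 128KB is actually
written to the pool.)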
>
> As I mentioned above, [2] says ZFS on RAID disks has serious
> bugs, except for mirror configurations. Anyway, I have given up
> using ZFS on RAID{5,6}* until Bug 151910 [2] is fixed.
>

If you boot from a USB stick (or CD), press Esc at the boot loader menu
and enter "lsdev -v". What sector and disk sizes are reported?

The issue [2] is a mix of ancient FreeBSD (v8.1 is mentioned there) and
RAID LUNs with a 512B sector size and 15TB(!) total size - are you really
sure your BIOS can actually address a 15TB LUN with 512B sectors? Note
that the problem with large disks can hide itself until the pool has
filled up enough that the essential files get stored above the
addressable limit… meaning you may have a "perfectly working" setup
until, at some point after the next update, it suddenly stops booting.

Note that for the BIOS version of the boot loader we have only INT13h,
and it really is limited. The UEFI version uses the EFI_BLOCK_IO API,
which usually handles large sectors and disk sizes better.

rgds,
toomas
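PS: once booted, the sizes the loader has to cope with can also be
sanity-checked from the OS side. A quick sketch, assuming the mfi(4)
device name from your transcript:

   # diskinfo -v mfid0     <- sector size / media size as the kernel sees them
   # gpart show mfid0      <- where the partitions, and thus the pool, sit

For scale: a firmware that internally truncates LBAs to 32 bits (a common
BIOS limitation) can reach only 2^32 * 512B = 2TiB of a 512B-sector disk.
On a ~20TB LUN, as soon as a file the loader needs lands above that
boundary, the INT13h path simply cannot read it - which matches the
"works until the pool fills up" failure mode described above.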