From: KIRIYAMA Kazuhiko <kiri@kx.openedu.org>
Date: Tue, 26 Jun 2018 11:08:30 +0900 (JST)
To: Toomas Soome
Cc: KIRIYAMA Kazuhiko, Allan Jude, freebsd-current@freebsd.org
Subject: Re: ZFS: I/O error - blocks larger than 16777216 are not supported
List-Id: Discussions about the use of FreeBSD-current

At Thu, 21 Jun 2018
10:48:28 +0300, Toomas Soome wrote:
>
> > On 21 Jun 2018, at 09:00, KIRIYAMA Kazuhiko wrote:
> >
> > At Wed, 20 Jun 2018 23:34:48 -0400,
> > Allan Jude wrote:
> >>
> >> On 2018-06-20 21:36, KIRIYAMA Kazuhiko wrote:
> >>> Hi all,
> >>>
> >>> I reported a ZFS boot failure earlier [1], and found that the
> >>> issue comes from the RAID configuration [2]. So I rebuilt the
> >>> array as RAID5 and re-installed 12.0-CURRENT (r333982), but it
> >>> failed to boot with:
> >>>
> >>>   ZFS: i/o error - all block copies unavailable
> >>>   ZFS: can't read MOS of pool zroot
> >>>   gptzfsboot: failed to mount default pool zroot
> >>>
> >>>   FreeBSD/x86 boot
> >>>   ZFS: I/O error - blocks larger than 16777216 are not supported
> >>>   ZFS: can't find dataset u
> >>>   Default: zroot/<0x0>:
> >>>
> >>> In this case the reason is "blocks larger than 16777216 are not
> >>> supported", and I guess this means datasets with a recordsize
> >>> greater than that are NOT supported by the FreeBSD boot loader
> >>> (zpool-features(7)). Is that true?
> >>>
> >>> My zpool features are as follows:
> >>>
> >>> # kldload zfs
> >>> # zpool import
> >>>    pool: zroot
> >>>      id: 13407092850382881815
> >>>   state: ONLINE
> >>>  status: The pool was last accessed by another system.
> >>>  action: The pool can be imported using its name or numeric identifier and
> >>>          the '-f' flag.
> >>>     see: http://illumos.org/msg/ZFS-8000-EY
> >>>  config:
> >>>
> >>>         zroot        ONLINE
> >>>           mfid0p3    ONLINE
> >>> # zpool import -fR /mnt zroot
> >>> # zpool list
> >>> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> >>> zroot  19.9T   129G  19.7T         -     0%     0%  1.00x  ONLINE  /mnt
> >>> # zpool get all zroot
> >>> NAME   PROPERTY                                   VALUE                 SOURCE
> >>> zroot  size                                       19.9T                 -
> >>> zroot  capacity                                   0%                    -
> >>> zroot  altroot                                    /mnt                  local
> >>> zroot  health                                     ONLINE                -
> >>> zroot  guid                                       13407092850382881815  default
> >>> zroot  version                                    -                     default
> >>> zroot  bootfs                                     zroot/ROOT/default    local
> >>> zroot  delegation                                 on                    default
> >>> zroot  autoreplace                                off                   default
> >>> zroot  cachefile                                  none                  local
> >>> zroot  failmode                                   wait                  default
> >>> zroot  listsnapshots                              off                   default
> >>> zroot  autoexpand                                 off                   default
> >>> zroot  dedupditto                                 0                     default
> >>> zroot  dedupratio                                 1.00x                 -
> >>> zroot  free                                       19.7T                 -
> >>> zroot  allocated                                  129G                  -
> >>> zroot  readonly                                   off                   -
> >>> zroot  comment                                    -                     default
> >>> zroot  expandsize                                 -                     -
> >>> zroot  freeing                                    0                     default
> >>> zroot  fragmentation                              0%                    -
> >>> zroot  leaked                                     0                     default
> >>> zroot  feature@async_destroy                      enabled               local
> >>> zroot  feature@empty_bpobj                        active                local
> >>> zroot  feature@lz4_compress                       active                local
> >>> zroot  feature@multi_vdev_crash_dump              enabled               local
> >>> zroot  feature@spacemap_histogram                 active                local
> >>> zroot  feature@enabled_txg                        active                local
> >>> zroot  feature@hole_birth                         active                local
> >>> zroot  feature@extensible_dataset                 enabled               local
> >>> zroot  feature@embedded_data                      active                local
> >>> zroot  feature@bookmarks                          enabled               local
> >>> zroot  feature@filesystem_limits                  enabled               local
> >>> zroot  feature@large_blocks                       enabled               local
> >>> zroot  feature@sha512                             enabled               local
> >>> zroot  feature@skein                              enabled               local
> >>> zroot  unsupported@com.delphix:device_removal     inactive              local
> >>> zroot  unsupported@com.delphix:obsolete_counts    inactive              local
> >>> zroot  unsupported@com.delphix:zpool_checkpoint   inactive              local
> >>> #
> >>>
> >>> Regards
> >>>
> >>> [1] https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068886.html
> >>> [2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=151910
> >>>
> >>> ---
> >>> KIRIYAMA Kazuhiko
> >>> _______________________________________________
> >>> freebsd-current@freebsd.org mailing list
> >>> https://lists.freebsd.org/mailman/listinfo/freebsd-current
> >>> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
> >>>
> >>
> >> I am guessing it means something is corrupt, as 16MB is the maximum
> >> size of a record in ZFS. Also, the 'large_blocks' feature is
> >> 'enabled', not 'active', so this suggests you do not have any records
> >> larger than 128KB on your pool.
> >
> > As I mentioned above, [2] suggests that ZFS on RAID volumes has
> > serious bugs except for mirrors. Anyway, I have given up using ZFS
> > on RAID{5,6}* until Bug 151910 [2] is fixed.
> >
>
> if you boot from usb stick (or cd), press esc at boot loader menu and
> enter lsdev -v. what sector and disk sizes are reported?

OK lsdev -v
disk devices:
disk0:   BIOS drive C (31588352 X 512)
  disk0p1: FreeBSD boot  512KB
  disk0p2: FreeBSD UFS  13GB
  disk0p3: FreeBSD swap  771MB
disk1:   BIOS drive D (4294967295 X 512)
  disk0p1: FreeBSD boot  512KB
  disk0p2: FreeBSD swap  128GB
  disk0p3: FreeBSD ZFS  19TB
OK

Does this mean that the whole disk size I can use is 2TB
(4294967295 X 512)?

> the issue [2] is a mix of ancient freebsd (v 8.1 is mentioned there),
> and RAID luns with 512B sector size and 15TB!!! total size - are you
> really sure your BIOS can actually address a 15TB lun (with 512B
> sector size)?
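For reference, the 2TB guess above checks out: 4294967295 is 2^32 - 1, the
largest value a 32-bit sector count can hold, so a BIOS-style interface
that keeps the count in 32 bits tops out just under 2 TiB with 512-byte
sectors. A minimal shell sketch of the arithmetic (nothing here is
FreeBSD-specific):

```shell
# 4294967295 sectors = 2^32 - 1, i.e. a sector count clamped to the
# maximum of a 32-bit field; at 512 bytes/sector that is just under 2 TiB.
bytes=$((4294967295 * 512))
echo "$bytes bytes"                 # 2199023255040 bytes
echo "$((bytes / 1000000000)) GB"   # 2199 GB, roughly 2.2 TB decimal
echo "32-bit max sector count: $((2**32 - 1))"   # 4294967295
```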
> Note that the problem with large disks can hide itself until the pool
> fills up enough that the essential files end up stored above the limit,
> meaning that you may have a "perfectly working" setup until at some
> point in time, after the next update, it is suddenly not working any
> more.
>

I see; that is why it worked for a while.

> Note that for the boot loader we have only INT13h for the BIOS
> version, and it really is limited. The UEFI version is using the
> EFI_BLOCK_IO API, which usually can handle large sectors and disk
> sizes better.

I re-installed the machine with UEFI boot:

# gpart show mfid0
=>          40  42965401520  mfid0  GPT  (20T)
            40       409600      1  efi  (200M)
        409640         2008         - free -  (1.0M)
        411648    268435456      2  freebsd-swap  (128G)
     268847104  42696552448      3  freebsd-zfs  (20T)
   42965399552         2008         - free -  (1.0M)

# uname -a
FreeBSD vm.openedu.org 12.0-CURRENT FreeBSD 12.0-CURRENT #0 r335317: Mon Jun 18 16:21:17 UTC 2018     root@releng3.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64
# zpool get all zroot
NAME   PROPERTY                       VALUE                 SOURCE
zroot  size                           19.9T                 -
zroot  capacity                       0%                    -
zroot  altroot                        -                     default
zroot  health                         ONLINE                -
zroot  guid                           11079446129259852576  default
zroot  version                        -                     default
zroot  bootfs                         zroot/ROOT/default    local
zroot  delegation                     on                    default
zroot  autoreplace                    off                   default
zroot  cachefile                      -                     default
zroot  failmode                       wait                  default
zroot  listsnapshots                  off                   default
zroot  autoexpand                     off                   default
zroot  dedupditto                     0                     default
zroot  dedupratio                     1.00x                 -
zroot  free                           19.9T                 -
zroot  allocated                      1.67G                 -
zroot  readonly                       off                   -
zroot  comment                        -                     default
zroot  expandsize                     -                     -
zroot  freeing                        0                     default
zroot  fragmentation                  0%                    -
zroot  leaked                         0                     default
zroot  bootsize                       -                     default
zroot  checkpoint                     -                     -
zroot  feature@async_destroy          enabled               local
zroot  feature@empty_bpobj            active                local
zroot  feature@lz4_compress           active                local
zroot  feature@multi_vdev_crash_dump  enabled               local
zroot  feature@spacemap_histogram     active                local
zroot  feature@enabled_txg            active                local
zroot  feature@hole_birth             active                local
zroot  feature@extensible_dataset     enabled               local
zroot  feature@embedded_data          active                local
zroot  feature@bookmarks              enabled               local
zroot  feature@filesystem_limits      enabled               local
zroot  feature@large_blocks           enabled               local
zroot  feature@sha512                 enabled               local
zroot  feature@skein                  enabled               local
zroot  feature@device_removal         enabled               local
zroot  feature@obsolete_counts        enabled               local
zroot  feature@zpool_checkpoint       enabled               local
#

and checked 'lsdev -v' at the loader prompt:

OK lsdev -v
disk devices:
PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/VenHw(CF31FAC5-C24E-11D2-85F3-00A0C93EC93B,80)
disk0: 4294967295 X 512 blocks
  disk0p1: EFI  200MB
  disk0p2: FreeBSD swap  128GB
  disk0p2: FreeBSD ZFS  19TB
net devices:
zfs devices:
  pool: zroot
bootfs: zroot/ROOT/default
config:

        NAME        STATE
        zroot       ONLINE
          mfid0p3   ONLINE
OK

but the disk size (4294967295 X 512) still has not changed - or does
this mean 4294967295 X 512 X 512 bytes?

> rgds,
> toomas
>
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"

Regards
---
KIRIYAMA Kazuhiko
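A footnote on the final question above: assuming the loader clamps an
oversized sector count to the 32-bit maximum rather than switching units
(an assumption about the reporting path, not verified against the loader
source), the printed figure is a cap, not a quantity to multiply by 512
twice. A sketch, with the full-disk sector count approximated from the
gpart output:

```shell
# real_sectors is approximated from "gpart show mfid0" above
# (40 + 42965401520 sectors, ~20 TB at 512 bytes/sector).
real_sectors=42965401560
cap=$((2**32 - 1))                                # 4294967295
reported=$(( real_sectors < cap ? real_sectors : cap ))
echo "reported sectors: $reported"                # 4294967295 - a clamp, not a unit change
echo "addressable: $((reported * 512 / 1000000000)) GB"   # 2199 GB of a ~22000 GB disk
```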