From owner-freebsd-stable@freebsd.org Wed Apr 25 12:15:10 2018
Date: Wed, 25 Apr 2018 09:15:03 -0300
Subject: Re: Two USB 4 disk enclosure and a panic
From: "Nenhum_de_Nos" <matheus@eternamente.info>
To: freebsd-stable@freebsd.org
In-Reply-To: <55533ad671792b7a30ff00cd1659a02b.squirrel@10.1.1.10>

On Mon, April 23, 2018 23:18, Nenhum_de_Nos wrote:
> Hi,
>
> I would like to know how to debug this. I have two 4-disk enclosures:
>
> Mediasonic ProBox 4 Bay 3.5" SATA HDD Enclosure - USB 3.0 & eSATA
> (HF2-SU3S2)
> NexStar HX4 - NST-640SU3-BK
>
> and both have 4 disks in them, and not all the disks are equal.
>
> The issue comes when I plug the ProBox USB 3 enclosure into the system.
> I can't even read /var/log/messages; it crashes very quickly.
>
> I can see the boot process up to the point where the second enclosure
> comes to be loaded. The 4 disks are shown on the dmesg/console, then a
> core dump happens, the boot process drops to the debugger screen, and a
> restart happens like a flash.
>
> The motherboard is an Intel® Desktop Board D525MW running 8GB RAM.
> All disks use ZFS, in 4 or 5 zpools: one raidz, one mirror, and two or
> three single-disk pools.
> FreeBSD xxx 11.1-RELEASE-p7 FreeBSD 11.1-RELEASE-p7 #1 r330596: Thu Mar 8
> 06:45:59 -03 2018 root@xxx:/usr/obj/usr/src/sys/FreeBSD-11-amd64-PF
> amd64
>
> The kernel is a slightly modified GENERIC, just to have ALTQ.
>
> How can I debug this? I have no idea. I have to use two machines to run
> all those disks, and I would really like to have just one for it.
>
> Could it be the amount of RAM? The other box is an APU2 from PC Engines
> and has 4GB RAM. apu2 uname -a: FreeBSD yyy 11.1-RELEASE-p4 FreeBSD
> 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017
> root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
>
> I tried to plug the Vantec hardware into the apu2 box; there it would
> not panic, but it wouldn't load all the Vantec disks either. I have
> really run out of ideas here :(
>
> thanks.
>
> --
> "We will call you Cygnus,
> the God of balance you shall be."
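Since the box reboots before anything can be read, one way to capture the panic for later analysis is to enable kernel crash dumps; as far as I understand, a minimal /etc/rc.conf fragment like this should be enough (assuming a swap device at least as large as RAM is configured):

```shell
# /etc/rc.conf -- enable kernel crash dumps.
# dumpdev="AUTO" uses the first configured swap device as the dump target;
# savecore(8) then copies the dump into dumpdir on the next boot.
dumpdev="AUTO"
dumpdir="/var/crash"
```

After the next panic, the saved core should be inspectable with something like `kgdb /boot/kernel/kernel /var/crash/vmcore.0` to get the full backtrace.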
Hi,

I found some logs in the daily security output:

+ZFS filesystem version: 5
+ZFS storage pool version: features support (5000)
+panic: Solaris(panic): blkptr at 0xfffff8000b93c848 DVA 1 has invalid VDEV 1
+cpuid = 0
+KDB: stack backtrace:
+#0 0xffffffff80ab65c7 at kdb_backtrace+0x67
+#1 0xffffffff80a746a6 at vpanic+0x186
+#2 0xffffffff80a74513 at panic+0x43
+#3 0xffffffff82623192 at vcmn_err+0xc2
+#4 0xffffffff824a73ba at zfs_panic_recover+0x5a
+#5 0xffffffff824ce893 at zfs_blkptr_verify+0x2d3
+#6 0xffffffff824ce8dc at zio_read+0x2c
+#7 0xffffffff82445fb4 at arc_read+0x6c4
+#8 0xffffffff824636a4 at dmu_objset_open_impl+0xd4
+#9 0xffffffff8247eafa at dsl_pool_init+0x2a
+#10 0xffffffff8249b093 at spa_load+0x823
+#11 0xffffffff8249a2de at spa_load_best+0x6e
+#12 0xffffffff82496a81 at spa_open_common+0x101
+#13 0xffffffff824e2879 at pool_status_check+0x29
+#14 0xffffffff824eba3d at zfsdev_ioctl+0x4ed
+#15 0xffffffff809429f8 at devfs_ioctl_f+0x128
+#16 0xffffffff80ad1f15 at kern_ioctl+0x255
+CPU: Intel(R) Atom(TM) CPU D525 @ 1.80GHz (1800.11-MHz K8-class CPU)
+avail memory = 8246845440 (7864 MB)
+Timecounter "TSC" frequency 1800110007 Hz quality 1000
+GEOM_PART: integrity check failed (ada0s1, BSD)
+GEOM_PART: integrity check failed (diskid/DISK-5LZ0ZDBBs1, BSD)
+ugen1.2: at usbus1
+ukbd0 on uhub0
+ukbd0: on usbus1
+kbd2 at ukbd0
+ZFS filesystem version: 5
+ZFS storage pool version: features support (5000)
+re0: link state changed to DOWN
+uhid0 on uhub0
+uhid0: on usbus1
+ums0 on uhub0
+ums0: on usbus1
+ums0: 3 buttons and [XYZ] coordinates ID=0
+re0: promiscuous mode enabled
+re0: link state changed to UP

From what I see, it can be ZFS related. If anyone has any hints, please
tell :)

I got curious about this:

+ZFS storage pool version: features support (5000)

How can I figure out whether my pools are from different versions, and
could that be the culprit here?

thanks,

matheus

--
"We will call you Cygnus,
the God of balance you shall be."
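For comparing pool versions, zpool(8) can report them directly; a sketch of what I plan to try (the pool names are whatever `zpool list` shows, of course):

```shell
# Show the on-disk version property of every imported pool.
# A value of "-" (internally 5000) means a feature-flags pool,
# which matches the "features support (5000)" line in dmesg.
zpool get version

# With no arguments, list any pools whose on-disk format is older
# than what this kernel supports (without changing anything).
zpool upgrade
```

Also, since the backtrace goes through zfs_panic_recover(), I understand there is a loader tunable, `vfs.zfs.recover=1` in /boot/loader.conf, that turns this class of panic into a warning; that might at least allow importing the suspect pool read-only for inspection, though I would not leave it set permanently.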