From: Johan Ström <johan@stromnet.se>
To: Johan Ström <johan@stromnet.se>
Cc: freebsd-fs@freebsd.org
Date: Wed, 10 Mar 2010 19:17:03 +0100
Message-Id: <02981D7A-09B4-4964-9E2F-63646B2C7129@stromnet.se>
In-Reply-To: <4231C45D-499B-4FC9-90C3-BCC34DF20965@stromnet.se>
Subject: Re: ZFS: zpool import hang on "zio->io_cv)" (with DDB output). Help needed!
Labels look alright:

# zdb -l /dev/ad6
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='bench'
    state=2
    txg=2166
    pool_guid=10982925876172809874
    hostid=224084357
    hostname='back-1.stromnet.se'
    top_guid=8240321046841083771
    guid=6775655865709593495
    vdev_tree
        type='raidz'
        id=0
        guid=8240321046841083771
        nparity=1
        metaslab_array=23
        metaslab_shift=35
        ashift=9
        asize=6001182375936
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=1346858432892182394
                path='/dev/ad6'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=5713829410293466138
                path='/dev/ad16'
                whole_disk=0
        children[2]
                type='disk'
                id=2
                guid=6775655865709593495
                path='/dev/ad18'
                whole_disk=0
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

# zdb -l /dev/ad16
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='bench'
    state=2
    txg=2166
    pool_guid=10982925876172809874
    hostid=224084357
    hostname='back-1.stromnet.se'
    top_guid=8240321046841083771
    guid=5713829410293466138
    vdev_tree
        type='raidz'
        id=0
        guid=8240321046841083771
        nparity=1
        metaslab_array=23
        metaslab_shift=35
        ashift=9
        asize=6001182375936
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=1346858432892182394
                path='/dev/ad6'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=5713829410293466138
                path='/dev/ad16'
                whole_disk=0
        children[2]
                type='disk'
                id=2
                guid=6775655865709593495
                path='/dev/ad18'
                whole_disk=0
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

# zdb -l /dev/ad18
--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='bench'
    state=2
    txg=8
    pool_guid=13542658689232285344
    hostid=224084357
    hostname='back-1.stromnet.se'
    top_guid=14685823632815031808
    guid=3228666435075435579
    vdev_tree
        type='raidz'
        id=0
        guid=14685823632815031808
        nparity=1
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=8001555529728
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=6901292936454137887
                path='/dev/ad6'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=10179612237289717409
                path='/dev/ad16'
                whole_disk=0
        children[2]
                type='disk'
                id=2
                guid=9860099641022027014
                path='/dev/ad18'
                whole_disk=0
        children[3]
                type='disk'
                id=3
                guid=3228666435075435579
                path='/dev/amrd2'
                whole_disk=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=13
    name='bench'
    state=2
    txg=8
    pool_guid=13542658689232285344
    hostid=224084357
    hostname='back-1.stromnet.se'
    top_guid=14685823632815031808
    guid=3228666435075435579
    vdev_tree
        type='raidz'
        id=0
        guid=14685823632815031808
        nparity=1
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=8001555529728
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=6901292936454137887
                path='/dev/ad6'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=10179612237289717409
                path='/dev/ad16'
                whole_disk=0
        children[2]
                type='disk'
                id=2
                guid=9860099641022027014
                path='/dev/ad18'
                whole_disk=0
        children[3]
                type='disk'
                id=3
                guid=3228666435075435579
                path='/dev/amrd2'
                whole_disk=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='test'
    state=0
    txg=4
    pool_guid=4936907373550577113
    hostname=''
    top_guid=3609426615657631821
    guid=13380697164729542683
    vdev_tree
        type='raidz'
        id=0
        guid=3609426615657631821
        nparity=1
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=8001576501248
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=13380697164729542683
                path='/dev/ad4'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=10325582179578809077
                path='/dev/ad6'
                whole_disk=0
        children[2]
                type='disk'
                id=2
                guid=4289142720340552060
                path='/dev/ad8'
                whole_disk=0
        children[3]
                type='disk'
                id=3
                guid=8524377709988872861
                path='/dev/ad10'
                whole_disk=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=13
    name='test'
    state=0
    txg=4
    pool_guid=4936907373550577113
    hostname=''
    top_guid=3609426615657631821
    guid=13380697164729542683
    vdev_tree
        type='raidz'
        id=0
        guid=3609426615657631821
        nparity=1
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=8001576501248
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=13380697164729542683
                path='/dev/ad4'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=10325582179578809077
                path='/dev/ad6'
                whole_disk=0
        children[2]
                type='disk'
                id=2
                guid=4289142720340552060
                path='/dev/ad8'
                whole_disk=0
        children[3]
                type='disk'
                id=3
                guid=8524377709988872861
                path='/dev/ad10'
                whole_disk=0

Won't try to import these; I'll clear the drives out and start migrating data to my new raidz1.

Johan

On Mar 9, 2010, at 20:07 , Johan Ström wrote:

> Got it working again, by ripping out my WD20EARS bench drives.
> Some more background:
>
> The pool I was testing on was created like this (for 3 disks: ad6, ad16, ad18):
>
> gpart create -s gpt adN
> gpart add -t freebsd-zfs -b 40 adN
> ...for all 3 drives...
>
> zpool create bench raidz1 ad6 ad16 ad18
>
> The reason for this was to test the performance difference between running straight on disk and running on a 4k-aligned partition (I was benchmarking and doing experiments on my new WD20EARS 4k-sector SATA disks).
>
> Now, it was during bonnie++'ing these drives that the system panicked. Whether that has anything at all to do with the panic, I do not know, since I was unable to obtain any dump/output.
>
> However, after booting again, the pool failed to import as described earlier.
> I then powered the machine down and disconnected all the drives in the above pool (ad6, ad16, ad18, and another, amrd2, which was not in use at the time).
> After power-on, the import went fine!
>
> So, where was the problem? I don't really know; it could have been a bunch of things, I guess. Some thoughts:
> - ZFS failed to import from the GPT partitions for some reason
> - The ZFS labels were borked due to me having booted with an invalid /boot/zfs/zpool.cache (booting from the old drive, but with new/correct loaders and kernels)
> - Borked ZFS labels due to the panic. I will try to plug the other disks back in tomorrow and try to zdb out the labels.
>
> As a side note, from what I could get out of the testing, ZFS on 4k-aligned partitions was not faster. However, for anyone using these drives with UFS, aligning does make for a pretty nice improvement! Some details can be found here:
> http://www.stromnet.se/~johan/bonnie-wd20ears-align-test.html (for UFS, compare the first two tables, unaligned vs. aligned, on for example Sequential output; block K/sec and the latency).
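The alignment arithmetic behind the `gpart add -b 40` choice above can be sketched in plain shell. This is a minimal sketch, assuming the drive reports 512-byte LBAs (consistent with the ashift=9 labels elsewhere in the thread); the variable names are illustrative, not from the original setup:

```shell
# Sketch: check that a GPT partition starting at LBA 40 (counted in
# 512-byte sectors, as with `gpart add -b 40` above) lands on a 4 KiB
# physical-sector boundary.
start_lba=40
sector_size=512                        # assumed LBA unit of the drive
offset=$((start_lba * sector_size))    # byte offset of the partition
if [ $((offset % 4096)) -eq 0 ]; then
    echo "LBA $start_lba is 4KiB-aligned"
else
    echo "LBA $start_lba is NOT 4KiB-aligned"
fi
```

By the same arithmetic, the traditional MBR start of LBA 63 (63 * 512 = 32256 bytes) is not a multiple of 4096, which is why such partitions straddle physical sectors on these drives.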
> Disclaimer: this was not very scientific and I'm not sure if I'm interpreting the results correctly, but it would seem that the average values are 77-80 MiB/s vs. avg 55 MiB/s, and latency sub-1000 ms vs. avg 2500 ms. Exactly what that means I'm not sure, but it IS better. :)
> From what I've understood regarding ZFS and 4k drives: since ZFS internally works with 512b blocks, it doesn't matter if I try to align anyway, until the underlying system can report 4k sectors properly (including getting these frikkin drives to do that, which they currently don't).
>
> Anyhow, besides the big side note: if anyone cares to try to reproduce, feel free :) I'll get back with zdb dumps on the other drives later on.
>
> Thanks for ZFS and FreeBSD :)
> Johan
>
>
> On Mar 9, 2010, at 17:22 , Johan Ström wrote:
>
>> Some followup: pjd tried to help me on IRC, without getting much further on the actual problem.
>> Some more information:
>>
>> Output from DDB ps: http://www.stromnet.se/~johan/back-1-ddb-ps.txt
>> Output from alltrace: http://www.stromnet.se/~johan/back-1-ddb-alltrace.txt
>>
>> db> show alllocks
>> Process 2738 (sshd) thread 0xffffff0050418000 (100178)
>> Process 12 (intr) thread 0xffffff0002407ab0 (100018)
>> db> show lockedvnods
>> Locked vnodes
>> db>
>>
>> Also tried accessing the disks (to make sure interrupts etc. were alive) using dd if=/dev/adXX of=/dev/null count=1 on every disk on the system; works fine.
>> Rebooted the system and tried with ZFS debugging enabled too, doing zpool import tank instead of just zpool import. No difference; hanging there now.
>>
>> Johan
>>
>>
>> On Mar 9, 2010, at 15:26 , Johan Ström wrote:
>>
>>> Hi list!
>>>
>>> I'm in the tedious process of upgrading the pool on my FreeBSD 8.0 box (with a pool from 7.x).
>>> Yesterday I pulled a few disks from my mirrored pool (one pool with multiple mirrors), in order to free up ports so I could plug in new ones and build my brand-new pool.
>>> After removing the disks, the pool of course entered state DEGRADED, since some of the disks were gone, but the data was still there. I did zpool detach on the removed disks, and the pool was ONLINE again; all fine!
>>> Then I rebooted to make another disk available through my old LSI MegaRaid card (didn't take the time to figure out the cryptic syntax of the megarc CLI util). On boot, I was met by something similar to this:
>>>
>>>   pool: tank
>>>  state: UNAVAIL
>>> status: One or more devices could not be used because the label is missing
>>>         or invalid.  There are insufficient replicas for the pool to continue
>>>         functioning.
>>> action: Destroy and re-create the pool from a backup source.
>>>    see: http://www.sun.com/msg/ZFS-8000-5E
>>>  scrub: none requested
>>> config:
>>>
>>>         NAME        STATE     READ WRITE CKSUM
>>>         tank        UNAVAIL      0     0     0  insufficient replicas
>>>           ad10s1d   ONLINE       0     0     0
>>>           mirror    DEGRADED     0     0     0
>>>             ad12    FAULTED      0     0     0  corrupted data
>>>             ad16    UNAVAIL      0     0     0  corrupted data
>>>           mirror    DEGRADED     0     0     0
>>>             ad20    FAULTED      0     0     0  corrupted data
>>>             ad18    UNAVAIL      0     0     0  corrupted data
>>>           mirror    UNAVAIL      0     0     0  insufficient replicas
>>>             ad6     UNAVAIL      0     0     0  corrupted data
>>>             ad4     FAULTED      0     0     0  corrupted data
>>>           mirror    ONLINE       0     0     0
>>>             amrd0   ONLINE       0     0     0
>>>             amrd1   ONLINE       0     0     0
>>>
>>> The reason the disks were still listed was that I had an old zpool.cache file in the boot environment (it boots from another drive, but that's another story...).
>>>
>>> In this case, an export/import did the trick; after reimporting, the pool was back online. All fine.
>>> I created a new pool and did some bonnie++ testing on it, and suddenly the box panicked or something (I didn't have dumpon enabled :/ and didn't see the screen until it rebooted).
>>>
>>> Now my problems begin. The box came up again with the above output. I tried zpool export again: fine. zpool import, however, hung. Waited an hour; nothing. After rebuilding the kernel with DDB/WITNESS and doing the import again, I've managed to get this output (I'm not really sure what is usable here):
>>>
>>> back-1 # zpool import
>>> load: 0.29  cmd: zpool 3193 [zio->io_cv)] 2.54r 0.00u 0.01s 0% 2236k
>>>
>>> In DDB:
>>> > tr 3193
>>> Tracing pid 3193 tid 100122 td 0xffffff00035ca390
>>> sched_switch() at sched_switch+0xde
>>> mi_switch() at mi_switch+0x170
>>> sleepq_wait() at sleepq_wait+0x44
>>> _cv_wait() at _cv_wait+0x13c
>>> zio_wait() at zio_wait+0x61
>>> arc_read_nolock() at arc_read_nolock+0x345
>>> dmu_objset_open_impl() at dmu_objset_open_impl+0xd0
>>> dsl_pool_open() at dsl_pool_open+0x5a
>>> spa_load() at spa_load+0x31b
>>> spa_tryimport() at spa_tryimport+0xa9
>>> zfs_ioc_pool_tryimport() at zfs_ioc_pool_tryimport+0x3f
>>> zfsdev_ioctl() at zfsdev_ioctl+0x8d
>>> devfs_ioctl_f() at devfs_ioctl_f+0x76
>>> kern_ioctl() at kern_ioctl+0xf6
>>> ioctl() at ioctl+0xfd
>>> syscall() at syscall+0x19e
>>> Xfast_syscall() at Xfast_syscall+0xe1
>>> --- syscall (54, FreeBSD ELF64, ioctl), rip = 0x8010eb86c, rsp = 0x7fffffff8e28, rbp = 0x801323300 ---
>>>
>>> db> show thread 100122
>>> Thread 100122 at 0xffffff00035ca390:
>>>  proc (pid 3193): 0xffffff000383c460
>>>  name: zpool
>>>  stack: 0xffffff805740a000-0xffffff805740dfff
>>>  flags: 0x44  pflags: 0x10000
>>>  state: INHIBITED: {SLEEPING}
>>>  wmesg: zio->io_cv)  wchan: 0xffffff00506e5858
>>>  priority: 131
>>>  container lock: sleepq chain (0xffffffff80c61e68)
>>>
>>> db> show sleepchain 10012
>>> db> show sleepchain 3193
>>> thread 100122 (pid 3193, zpool) sleeping on 0xffffff00506e5858 "zio->io_cv)"
>>> db> show lock 0xffffff00506e5858
>>>  class: spin mutex
>>>  name: zio->io_cv)
>>>  flags: {SPIN}
>>>  state: {OWNED}
>>>
>>> The box is currently in this state, so if you reply to me now I can continue to debug according to instructions.
>>> Since I cannot get anywhere on my own with this, any and all help is appreciated, since I really need this pool back online.
>>>
>>> dmesg is posted below (including some LORs?)
>>>
>>> Thanks!
>>> Johan
>>>
>>>
>>> DMESG:
>>>
>>> Copyright (c) 1992-2009 The FreeBSD Project.
>>> Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
>>>         The Regents of the University of California. All rights reserved.
>>> FreeBSD is a registered trademark of The FreeBSD Foundation.
>>> FreeBSD 8.0-RELEASE-p2 #10: Tue Mar 9 12:44:15 CET 2010
>>>     johan@back-1.stromnet.se:/usr/obj/usr/src/sys/BACK1
>>> WARNING: WITNESS option enabled, expect reduced performance.
>>> Timecounter "i8254" frequency 1193182 Hz quality 0
>>> CPU: Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66GHz (2666.68-MHz K8-class CPU)
>>>   Origin = "GenuineIntel"  Id = 0x6fb  Stepping = 11
>>>   Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
>>>   Features2=0xe3fd
>>>   AMD Features=0x20100800
>>>   AMD Features2=0x1
>>>   TSC: P-state invariant
>>> real memory  = 2147483648 (2048 MB)
>>> avail memory = 2040631296 (1946 MB)
>>> ACPI APIC Table:
>>> FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
>>> FreeBSD/SMP: 1 package(s) x 2 core(s)
>>>  cpu0 (BSP): APIC ID: 0
>>>  cpu1 (AP): APIC ID: 1
>>> ioapic0: Changing APIC ID to 2
>>> ioapic0 irqs 0-23 on motherboard
>>> kbd1 at kbdmux0
>>> cryptosoft0: on motherboard
>>> acpi0: on motherboard
>>> acpi0: [ITHREAD]
>>> acpi0: Power Button (fixed)
>>> acpi0: reservation of 0, a0000 (3) failed
>>> acpi0: reservation of 100000, 7f4e0000 (3) failed
>>> Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000
>>> acpi_timer0: <24-bit timer at 3.579545MHz> port 0x408-0x40b on acpi0
>>> acpi_hpet0: iomem 0xfed00000-0xfed003ff on acpi0
>>> Timecounter "HPET" frequency 14318180 Hz quality 900
>>> acpi_button0: on acpi0
>>> pcib0: port 0xcf8-0xcff on acpi0
>>> pci0: on pcib0
>>> vgapci0: port 0xe000-0xe007 mem 0xe6300000-0xe637ffff,0xd0000000-0xdfffffff,0xe6000000-0xe60fffff irq 16 at device 2.0 on pci0
>>> agp0: on vgapci0
>>> agp0: detected 7164k stolen memory
>>> agp0: aperture size is 256M
>>> uhci0: port 0xe100-0xe11f irq 16 at device 26.0 on pci0
>>> uhci0: [ITHREAD]
>>> uhci0: LegSup = 0x2f00
>>> usbus0: on uhci0
>>> uhci1: port 0xe200-0xe21f irq 21 at device 26.1 on pci0
>>> uhci1: [ITHREAD]
>>> uhci1: LegSup = 0x2f00
>>> usbus1: on uhci1
>>> uhci2: port 0xe600-0xe61f irq 18 at device 26.2 on pci0
>>> uhci2: [ITHREAD]
>>> uhci2: LegSup = 0x2f00
>>> usbus2: on uhci2
>>> ehci0: mem 0xe6384000-0xe63843ff irq 18 at device 26.7 on pci0
>>> ehci0: [ITHREAD]
>>> usbus3: EHCI version 1.0
>>> usbus3: on ehci0
>>> hdac0: mem 0xe6380000-0xe6383fff irq 22 at device 27.0 on pci0
>>> hdac0: HDA Driver Revision: 20090624_0136
>>> hdac0: [ITHREAD]
>>> pcib1: irq 16 at device 28.0 on pci0
>>> pci1: on pcib1
>>> pcib2: irq 18 at device 28.2 on pci0
>>> pci2: on pcib2
>>> em0: port 0xa000-0xa01f mem 0xe1020000-0xe103ffff,0xe1000000-0xe101ffff irq 18 at device 0.0 on pci2
>>> em0: Using MSI interrupt
>>> em0: [FILTER]
>>> em0: Ethernet address: 00:1b:21:05:00:b4
>>> pcib3: irq 19 at device 28.3 on pci0
>>> pci3: on pcib3
>>> atapci0: port 0xb000-0xb007,0xb100-0xb103,0xb200-0xb207,0xb300-0xb303,0xb400-0xb40f mem 0xe6100000-0xe6101fff irq 19 at device 0.0 on pci3
>>> atapci0: [ITHREAD]
>>> atapci0: AHCI called from vendor specific driver
>>> atapci0: AHCI v1.00 controller with 2 3Gbps ports, PM supported
>>> ata2: on atapci0
>>> ata2: [ITHREAD]
>>> ata3: on atapci0
>>> ata3: [ITHREAD]
>>> ata4: on atapci0
>>> ata4: [ITHREAD]
>>> pcib4: irq 16 at device 28.4 on pci0
>>> pci4: on pcib4
>>> re0: 8111CP/8111DP PCIe Gigabit Ethernet port 0xc000-0xc0ff mem 0xe3000000-0xe3000fff irq 16 at device 0.0 on pci4
>>> re0: Using 1 MSI messages
>>> re0: Chip rev. 0x38000000
>>> re0: MAC rev. 0x00000000
>>> miibus0: on re0
>>> rgephy0: PHY 1 on miibus0
>>> rgephy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto
>>> re0: Ethernet address: 00:1a:4d:5a:97:87
>>> re0: [FILTER]
>>> uhci3: port 0xe300-0xe31f irq 23 at device 29.0 on pci0
>>> uhci3: [ITHREAD]
>>> uhci3: LegSup = 0x2f00
>>> usbus4: on uhci3
>>> uhci4: port 0xe400-0xe41f irq 19 at device 29.1 on pci0
>>> uhci4: [ITHREAD]
>>> uhci4: LegSup = 0x2f00
>>> usbus5: on uhci4
>>> uhci5: port 0xe500-0xe51f irq 18 at device 29.2 on pci0
>>> uhci5: [ITHREAD]
>>> uhci5: LegSup = 0x2f00
>>> usbus6: on uhci5
>>> ehci1: mem 0xe6385000-0xe63853ff irq 23 at device 29.7 on pci0
>>> ehci1: [ITHREAD]
>>> usbus7: EHCI version 1.0
>>> usbus7: on ehci1
>>> pcib5: at device 30.0 on pci0
>>> pci5: on pcib5
>>> amr0: mem 0xe6200000-0xe620ffff irq 20 at device 0.0 on pci5
>>> amr0: Using 64-bit DMA
>>> amr0: [ITHREAD]
>>> amr0: delete logical drives supported by controller
>>> amr0: Firmware 713S, BIOS G121, 64MB RAM
>>> xl0: <3Com 3c905C-TX Fast Etherlink XL> port 0xd000-0xd07f mem 0xe5004000-0xe500407f irq 19 at device 1.0 on pci5
>>> miibus1: on xl0
>>> xlphy0: <3c905C 10/100 internal PHY> PHY 24 on miibus1
>>> xlphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
>>> xl0: Ethernet address: 00:04:76:ef:c6:36
>>> xl0: [ITHREAD]
>>> skc0: port 0xd100-0xd1ff mem 0xe5000000-0xe5003fff irq 18 at device 2.0 on pci5
>>> pci0:5:2:0: invalid VPD data, remain 0xfc
>>> skc0: SysKonnect SK-NET Gigabit Ethernet Adapter SK-9843 SX rev. (0x0)
>>> sk0: on skc0
>>> sk0: Ethernet address: 00:00:5a:98:43:68
>>> miibus2: on sk0
>>> xmphy0: PHY 0 on miibus2
>>> xmphy0: 1000baseSX, 1000baseSX-FDX, auto
>>> skc0: [ITHREAD]
>>> isab0: at device 31.0 on pci0
>>> isa0: on isab0
>>> atapci1: port 0xe700-0xe707,0xe800-0xe803,0xe900-0xe907,0xea00-0xea03,0xeb00-0xeb1f mem 0xe6386000-0xe63867ff irq 19 at device 31.2 on pci0
>>> atapci1: [ITHREAD]
>>> atapci1: AHCI called from vendor specific driver
>>> atapci1: AHCI v1.20 controller with 6 3Gbps ports, PM supported
>>> ata5: on atapci1
>>> ata5: [ITHREAD]
>>> ata6: on atapci1
>>> ata6: [ITHREAD]
>>> ata7: on atapci1
>>> ata7: [ITHREAD]
>>> ata8: on atapci1
>>> ata8: [ITHREAD]
>>> ata9: on atapci1
>>> ata9: [ITHREAD]
>>> ata10: on atapci1
>>> ata10: [ITHREAD]
>>> pci0: at device 31.3 (no driver attached)
>>> atrtc0: port 0x70-0x73 on acpi0
>>> fdc0: port 0x3f0-0x3f5,0x3f7 irq 6 drq 2 on acpi0
>>> fdc0: [FILTER]
>>> uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
>>> uart0: [FILTER]
>>> ppc0: port 0x378-0x37f irq 7 on acpi0
>>> ppc0: Generic chipset (NIBBLE-only) in COMPATIBLE mode
>>> ppc0: [ITHREAD]
>>> ppbus0: on ppc0
>>> plip0: on ppbus0
>>> plip0: [ITHREAD]
>>> lpt0: on ppbus0
>>> lpt0: [ITHREAD]
>>> lpt0: Interrupt-driven port
>>> ppi0: on ppbus0
>>> atkbdc0: port 0x60,0x64 irq 1 on acpi0
>>> atkbd0: irq 1 on atkbdc0
>>> kbd0 at atkbd0
>>> atkbd0: [GIANT-LOCKED]
>>> atkbd0: [ITHREAD]
>>> cpu0: on acpi0
>>> est0: on cpu0
>>> est: CPU supports Enhanced Speedstep, but is not recognized.
>>> est: cpu_vendor GenuineIntel, msr 82a082a0600082a
>>> device_attach: est0 attach returned 6
>>> p4tcc0: on cpu0
>>> cpu1: on acpi0
>>> est1: on cpu1
>>> est: CPU supports Enhanced Speedstep, but is not recognized.
>>> est: cpu_vendor GenuineIntel, msr 82a082a0600082a
>>> device_attach: est1 attach returned 6
>>> p4tcc1: on cpu1
>>> orm0: at iomem 0xcc000-0xcc7ff,0xcd000-0xcefff on isa0
>>> sc0: at flags 0x100 on isa0
>>> sc0: VGA <16 virtual consoles, flags=0x300>
>>> vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
>>> ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present;
>>>             to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf.
>>> ZFS filesystem version 13
>>> ZFS storage pool version 13
>>> Timecounters tick every 1.000 msec
>>> IPsec: Initialized Security Association Processing.
>>> usbus0: 12Mbps Full Speed USB v1.0
>>> usbus1: 12Mbps Full Speed USB v1.0
>>> usbus2: 12Mbps Full Speed USB v1.0
>>> usbus3: 480Mbps High Speed USB v2.0
>>> usbus4: 12Mbps Full Speed USB v1.0
>>> usbus5: 12Mbps Full Speed USB v1.0
>>> usbus6: 12Mbps Full Speed USB v1.0
>>> usbus7: 480Mbps High Speed USB v2.0
>>> ad4: 476938MB at ata2-master SATA300
>>> ugen0.1: at usbus0
>>> uhub0: on usbus0
>>> ugen1.1: at usbus1
>>> uhub1: on usbus1
>>> ugen2.1: at usbus2
>>> uhub2: on usbus2
>>> ugen3.1: at usbus3
>>> uhub3: on usbus3
>>> ugen4.1: at usbus4
>>> uhub4: on usbus4
>>> ugen5.1: at usbus5
>>> uhub5: on usbus5
>>> ugen6.1: at usbus6
>>> uhub6: on usbus6
>>> ugen7.1: at usbus7
>>> uhub7: on usbus7
>>> ad6: 1907729MB at ata3-master SATA300
>>> uhub0: 2 ports with 2 removable, self powered
>>> uhub1: 2 ports with 2 removable, self powered
>>> uhub2: 2 ports with 2 removable, self powered
>>> uhub4: 2 ports with 2 removable, self powered
>>> uhub5: 2 ports with 2 removable, self powered
>>> uhub6: 2 ports with 2 removable, self powered
>>> ad10: 715404MB at ata5-master SATA300
>>> ad12: 305245MB at ata6-master SATA150
>>> ad14: 715404MB at ata7-master SATA300
>>> ad16: 1907729MB at ata8-master SATA300
>>> GEOM_MIRROR: Device mirror/gm1a launched (1/1).
>>> GEOM_MIRROR: Device mirror/gm1b launched (1/1).
>>> ad18: 1907729MB at ata9-master SATA300
>>> GEOM_MIRROR: Device mirror/swap launched (1/1).
>>> ad20: 286187MB at ata10-master SATA150
>>> hdac0: HDA Codec #2: Realtek ALC885
>>> pcm0: at cad 2 nid 1 on hdac0
>>> pcm1: at cad 2 nid 1 on hdac0
>>> pcm2: at cad 2 nid 1 on hdac0
>>> pcm3: at cad 2 nid 1 on hdac0
>>> pcm4: at cad 2 nid 1 on hdac0
>>> pcm5: at cad 2 nid 1 on hdac0
>>> amr0: delete logical drives supported by controller
>>> amrd0: on amr0
>>> amrd0: 476935MB (976762880 sectors) RAID 0 (optimal)
>>> amrd1: on amr0
>>> amrd1: 476935MB (976762880 sectors) RAID 0 (optimal)
>>> amrd2: on amr0
>>> amrd2: 1907724MB (3907018752 sectors) RAID 0 (optimal)
>>> SMP: AP CPU #1 Launched!
>>> WARNING: WITNESS option enabled, expect reduced performance.
>>> Root mount waiting for: usbus7 usbus3
>>> uhub3: 6 ports with 6 removable, self powered
>>> uhub7: 6 ports with 6 removable, self powered
>>> Trying to mount root from zfs:zroot
>>> ugen0.2: at usbus0
>>> ugen1.2: at usbus1
>>> uma_zalloc_arg: zone "256" with the following non-sleepable locks held:
>>> exclusive rw ifnet_rw (ifnet_rw) r = 0 (0xffffffff80e01f60) locked @ /usr/src/sys/net/if.c:402
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
>>> _witness_debugger() at _witness_debugger+0x2c
>>> witness_warn() at witness_warn+0x2c2
>>> uma_zalloc_arg() at uma_zalloc_arg+0x29d
>>> malloc() at malloc+0x5d
>>> if_grow() at if_grow+0x2f
>>> if_alloc() at if_alloc+0x2b3
>>> gif_clone_create() at gif_clone_create+0x53
>>> ifc_simple_create() at ifc_simple_create+0x89
>>> if_clone_createif() at if_clone_createif+0x64
>>> ifioctl() at ifioctl+0x6b5
>>> kern_ioctl() at kern_ioctl+0xf6
>>> ioctl() at ioctl+0xfd
>>> syscall() at syscall+0x19e
>>> Xfast_syscall() at Xfast_syscall+0xe1
>>> --- syscall (54, FreeBSD ELF64, ioctl), rip = 0x800b8286c, rsp = 0x7fffffffe4a8, rbp = 0x7fffffffef6e ---
>>> lock order reversal:
>>>  1st 0xffffffff80c093e0 pf task mtx (pf task mtx) @ /usr/src/sys/contrib/pf/net/pf_ioctl.c:1393
>>>  2nd 0xffffffff80e01f60 ifnet_rw (ifnet_rw) @ /usr/src/sys/net/if.c:2034
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
>>> _witness_debugger() at _witness_debugger+0x2c
>>> witness_checkorder() at witness_checkorder+0x66f
>>> _rw_rlock() at _rw_rlock+0x29
>>> ifunit() at ifunit+0x22
>>> pfioctl() at pfioctl+0x262a
>>> devfs_ioctl_f() at devfs_ioctl_f+0x76
>>> kern_ioctl() at kern_ioctl+0xf6
>>> ioctl() at ioctl+0xfd
>>> syscall() at syscall+0x19e
>>> Xfast_syscall() at Xfast_syscall+0xe1
>>> --- syscall (54, FreeBSD ELF64, ioctl), rip = 0x80099886c, rsp = 0x7fffffffdb68, rbp = 0x7fffffffdc20 ---
>>> lock order reversal:
>>>  1st 0xffffff00500c3098 zfs (zfs) @ /usr/src/sys/kern/vfs_mount.c:1054
>>>  2nd 0xffffff005010f448 devfs (devfs) @ /usr/src/sys/kern/vfs_subr.c:2083
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
>>> _witness_debugger() at _witness_debugger+0x2c
>>> witness_checkorder() at witness_checkorder+0x66f
>>> __lockmgr_args() at __lockmgr_args+0x475
>>> vop_stdlock() at vop_stdlock+0x39
>>> VOP_LOCK1_APV() at VOP_LOCK1_APV+0x46
>>> _vn_lock() at _vn_lock+0x47
>>> vget() at vget+0x56
>>> devfs_allocv() at devfs_allocv+0x103
>>> devfs_root() at devfs_root+0x48
>>> vfs_donmount() at vfs_donmount+0xf43
>>> nmount() at nmount+0x63
>>> syscall() at syscall+0x19e
>>> Xfast_syscall() at Xfast_syscall+0xe1
>>> --- syscall (378, FreeBSD ELF64, nmount), rip = 0x8007b04dc, rsp = 0x7fffffffdd28, rbp = 0x800a04048 ---
>>> n
>>>
>>> r
>>> tun0: link state changed to UP
>>> lock order reversal:
>>>  1st 0xffffff00500c3098 zfs (zfs) @ /usr/src/sys/kern/vfs_mount.c:1200
>>>  2nd 0xffffff005010f270 syncer (syncer) @ /usr/src/sys/kern/vfs_subr.c:2188
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
>>> _witness_debugger() at _witness_debugger+0x2c
>>> witness_checkorder() at witness_checkorder+0x66f
>>> __lockmgr_args() at __lockmgr_args+0x475
>>> vop_stdlock() at vop_stdlock+0x39
>>> VOP_LOCK1_APV() at VOP_LOCK1_APV+0x46
>>> _vn_lock() at _vn_lock+0x47
>>> vrele() at vrele+0xc3
>>> dounmount() at dounmount+0x269
>>> unmount() at unmount+0x27e
>>> syscall() at syscall+0x19e
>>> Xfast_syscall() at Xfast_syscall+0xe1
>>> --- syscall (22, FreeBSD ELF64, unmount), rip = 0x8006a09bc, rsp = 0x7fffffffde18, rbp = 0 ---
>>> KDB: enter: manual escape to debugger
>>>
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"