From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 11:07:09 2012
Date: Mon, 9 Jul 2012 11:07:08 GMT
Message-Id: <201207091107.q69B787V075406@freefall.freebsd.org>
From: FreeBSD bugmaster
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).
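For example, to pull one of the reports below from a shell (a minimal
sketch; the PR number is just an example):

    # Retrieve PR kern/169480 on stdout via the GNATS web interface.
    fetch -o - "http://www.freebsd.org/cgi/query-pr.cgi?pr=169480"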
The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including
experimental development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/169480 fs [zfs] ZFS stalls on heavy I/O
o kern/169398 fs [zfs] Can't remove file with permanent error
o kern/169339 fs panic while " : > /etc/123"
o kern/169319 fs [zfs] zfs resilver can't complete
o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688 fs [fusefs] Incorrect signal handling with direct_io
o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612 fs [portalfs] The portal file system gets stuck inside po
o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067 fs [zfs] [panic] ZFS panics the server
o kern/167066 fs [zfs] ZVOLs not appearing in /dev/zvol
o kern/167065 fs [zfs] boot fails when a spare is the boot disk
o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477 fs [nfs] NFS data corruption.
o kern/165950 fs [ffs] SU+J and fsck problem
o kern/165923 fs [nfs] Writing to NFS-backed mmapped files fails if flu
o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392 fs Multiple mkdir/rmdir fails with errno 31
o kern/165087 fs [unionfs] lock violation in unionfs
o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency
o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256 fs [zfs] device entry for volume is not created after zfs
o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944 fs [coda] Coda file system module looks broken in 9.0
o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751 fs [zfs] [panic] kernel panics during file operations
o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi
o kern/162362 fs [snapshots] [panic] ufs with snapshot(s) panics when g
o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161897 fs [zfs] [patch] zfs partition probing causing long delay
o kern/161864 fs [ufs] removing journaling from UFS partition fails on
o bin/161807 fs [patch] add option for explicitly specifying metadata
o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280 fs [zfs] Stack overflow in gptzfsboot
o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic
o kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930 fs [ufs] [panic] kernel core
o kern/159402 fs [zfs][loader] symlinks cause I/O errors
o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs()
o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077 fs [zfs] Can't cd .. with latest zfs version
o kern/159048 fs [smbfs] smb mount corrupts large files
o kern/159045 fs [zfs] [hang] ZFS scrub freezes system
o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802 fs amd(8) ICMP storm and unkillable process.
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929 fs [nfs] NFS slow read
o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781 fs [zfs] zfs is losing the snapshot directory,
p kern/156545 fs [ufs] mv could break UFS on SMP systems
o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current
o kern/155587 fs [zfs] [panic] kernel panic with zfs
p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104 fs [zfs][patch] use /dev prefix by default when importing
o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828 fs [msdosfs] Unable to create directories on external USB
o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228 fs [md] md getting stuck in wdrain state
o kern/153996 fs [zfs] zfs root mount error while kernel is not located
o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716 fs [zfs] zpool scrub time remaining is incorrect
o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions
o kern/153520 fs [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable
o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351 fs [zfs] locking directories/files in ZFS
o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w
o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022 fs [nfs] nfs service hangs with linux client [regression]
o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory
o kern/151905 fs [zfs] page fault under load in /sbin/zfs
o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648 fs [zfs] disk wait bug
o kern/151629 fs [fs] [patch] Skip empty directory entries during name
o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a
o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251 fs [ufs] Can not create files on filesystem with heavy us
o kern/151226 fs [zfs] can't delete zfs snapshot
o kern/151111 fs [zfs] vnodes leakage during zfs unmount
o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208 fs mksnap_ffs(8) hang/deadlock
o kern/149173 fs [patch] [zfs] make OpenSolaris installa
o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138 fs [zfs] zfs raidz pool commands freeze
o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147560 fs [zfs] [boot] Booting 8.1-PRERELEASE raidz system take
o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786 fs [zfs] zpool import hangs with checksum errors
o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528 fs [zfs] Severe memory leak in ZFS on i386
o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server
s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev
o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank
o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189 fs [nfs] nfsd performs abysmally under load
o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416 fs [panic] Kernel panic on online filesystem optimization
s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825 fs [nfs] [panic] Kernel panic on NFS client
o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212 fs [nfs] NFSv4 client strange work ...
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR
o kern/142878 fs [zfs] [vfs] lock order reversal
o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real
o kern/142489 fs [zfs] [lor] allproc/zfs LOR
o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068 fs [ufs] BSD labels are got deleted spontaneously
o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues (
o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri
o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640 fs [zfs] snapshot crash
o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs
p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n
o kern/139564 fs [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo
o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot
o kern/138662 fs [panic] ffs_blkfree: freeing free block
o kern/138421 fs [ufs] [patch] remove UFS label limitations
o kern/138202 fs mount_msdosfs(1) see only 2Gb
o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873 fs [ntfs] Missing directories/files on NTFS volume
o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic
p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS
o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot
o kern/134491 fs [zfs] Hot spares are rather cold...
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis
o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397 fs reboot causes filesystem corruption (failure to sync b
o kern/132331 fs [ufs] [lor] LOR ufs and syncer
o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145 fs [panic] File System Hard Crashes
o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo
o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin
o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210 fs [nullfs] Error by check nullfs
o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8)
o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029 fs [panic] mount(8): trying to mount a write protected zi
o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS
o kern/123939 fs [msdosfs] corrupts new files
o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes
o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912 fs [2tb] disk sizing/geometry problem with large array
o kern/118713 fs [minidump] [patch] Display media size required for a k
o kern/118318 fs [nfs] NFS server hangs under special circumstances
o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime
o kern/118126 fs [nfs] [patch] Poor NFS server write performance
o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954 fs [ufs] dirhash on very large directories blocks the mac
o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with
o kern/116583 fs [ffs] [hang] System freezes for short time when using
o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468 fs [patch] [request] add -d option to umount(8) to detach
o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral
o bin/113838 fs [patch] [request] mount(8): add support for relative p
o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843 fs [msdosfs] Long Names of files are incorrectly created
o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems
s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem
o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist
o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear
o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498 fs [request] newfs(8) has no option to clear the first 12
o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849 fs [ufs] rename on UFS filesystem is not atomic
o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean'
o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733 fs [smbfs] smbfs may cause double unlock
o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134 fs [smbfs] [patch] Preserve access and modification time
a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet
o kern/88657 fs [smbfs] windows client hang when browsing a samba shar
o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64
o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi
o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859 fs [smbfs] System reboot while umount smbfs.
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779 fs Background-fsck checks one filesystem twice and omits
o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun
o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po
o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange
o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503 fs [smbfs] mount_smbfs does not work as non-root
o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc
o kern/36566 fs [smbfs] System reboot with dead smb mount and umount
o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc
o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t

280 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 12:44:22 2012
Date: Mon, 09 Jul 2012 13:44:03 +0100
From: Johannes Totz
To: freebsd-fs@freebsd.org
Subject: zfs send glitch

Hi,

zfs send with verbose flag fails for some reason, whereas omitting the
verbose flag works (beware of line breaks):

# zfs send -vRI @120203-2320 backup/alexs-imac/120607-0056@120607-0056 |
  zfs receive -vun panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320

send from @120203-2320 to backup/alexs-imac/120607-0056@120603-2311
estimated size is 32.8G
send from @120603-2311 to backup/alexs-imac/120607-0056@120607-0056
estimated size is 8.56G
total estimated size is 41.3G
cannot hold 'backup/alexs-imac/120607-0056@120203-2320': pool must be
upgraded
WARNING: could not send backup/alexs-imac/120607-0056@120607-0056:
incremental source (backup/alexs-imac/120607-0056@120203-2320) does not
exist

And now without verbose flag:

# zfs send -RI @120203-2320 backup/alexs-imac/120607-0056@120607-0056 |
  zfs receive -vu panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320

receiving incremental stream of
backup/alexs-imac/120607-0056@120603-2311 into
panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320@120603-2311

This is on:
FreeBSD XXX 9.0-STABLE FreeBSD 9.0-STABLE #1 r237006: Wed Jun 13 17:06:56
BST 2012 root@XXX:/usr/obj/usr/src/sys/GENERIC amd64

Is this a known glitch? I remember -v used to work fine on this machine
(before I updated it last month).

Johannes
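The "cannot hold ... pool must be upgraded" line above suggests that the
verbose/recursive send path is trying to place a user hold on the source
snapshot and that the source pool's on-disk version is too old for holds;
that reading is an assumption from the error text, not a confirmed
diagnosis. A minimal sketch for checking it, reusing the names from the
post:

    # Does the source pool support user holds (needs a recent pool version)?
    zpool get version backup
    # Any holds already present on the incremental source snapshot?
    zfs holds backup/alexs-imac/120607-0056@120203-2320
    # If the version is too old, an in-place upgrade may help; note that
    # zpool upgrade is irreversible and older kernels can no longer import
    # the pool afterwards.
    zpool upgrade backup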
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 15:38:24 2012
Date: Mon, 9 Jul 2012 11:38:15 -0400
From: John Baldwin <jhb@freebsd.org>
To: freebsd-fs@freebsd.org
Cc: pho@freebsd.org, Konstantin Belousov
Message-Id: <201207091138.15655.jhb@freebsd.org>
Subject: Re: close() of an flock'd file is not atomic

On Wednesday, June 06, 2012 8:17:54 am John Baldwin wrote:
> On Friday, March 16, 2012 2:06:27 pm John Baldwin wrote:
> > On Friday, March 09, 2012 10:59:29 am John Baldwin wrote:
> > > On Thursday, March 08, 2012 5:39:19 pm Konstantin Belousov wrote:
> > > > On Thu, Mar 08, 2012 at 03:39:07PM -0500, John Baldwin wrote:
> > > > > On Wednesday, March 07, 2012 1:18:07 pm John Baldwin wrote:
> > > > > > So I ran into this problem at work. Suppose you have a process that opens a
> > > > > > read-write file descriptor with O_EXLOCK (so it has an flock()). It then
> > > > > > writes out a binary into that file. Another process wants to execve() the
> > > > > > file when it is ready, so it opens the file with O_EXLOCK (or O_SHLOCK), and
> > > > > > will call execve() once it has locked the file. In theory, what should happen
> > > > > > is that the second process should wait until the first process has finished
> > > > > > and called close(). In practice what happens is that I occasionally see the
> > > > > > second process fail with ETXTBUSY.
> > > > > >
> > > > > > The bug is that the vn_closefile() does the VOP_ADVLOCK() to unlock the file
> > > > > > separately from the call to vn_close() which drops the writecount. Thus, the
> > > > > > second process can do an open() and flock() of the file and subsequently call
> > > > > > execve() after the first process has done the VOP_ADVLOCK(), but before it
> > > > > > calls into vn_close().
> > > > > > In fact, since vn_close() requires a write lock on the
> > > > > > vnode, this turns out to not be too hard to reproduce at all. Below is a
> > > > > > simple test program that reproduces this constantly. To use, copy /bin/test
> > > > > > to some other file (e.g. /tmp/foo) and make it writable (chmod a+w), then run
> > > > > > ./flock_close_race /tmp/foo.
> > > > > >
> > > > > > The "fix" I came up with is to defer calling VOP_ADVLOCK() to release the lock
> > > > > > until after vn_close() executes. However, even with that fix applied, my test
> > > > > > case still fails. Now it is because open() with a given lock flag is
> > > > > > non-atomic in that the open(O_RDWR) will call vn_open() and bump v_writecount
> > > > > > before it blocks on the lock due to O_EXLOCK, so even though the 'exec_child'
> > > > > > process has the fd locked, the writecount can still be bumped. One gross hack
> > > > > > would be to defer the bump of the writecount to the caller of vn_open() if the
> > > > > > caller passes in O_EXLOCK or O_SHLOCK, but that's a really gross kludge, plus
> > > > > > it doesn't actually work. I ended up moving acquiring the lock into
> > > > > > vn_open_cred(). The current patch I'm testing has both of these approaches,
> > > > > > but the first one is #if 0'd out, and the second is #if 1'd.
> > > > > >
> > > > > > http://www.freebsd.org/~jhb/patches/flock_open_close.patch
> > > > >
> > > > > Based on some feedback from Konstantin, I've fixed some issues in the failure
> > > > > path handling for VOP_ADVLOCK(). I've also removed the #if 0'd code mentioned
> > > > > above, so the patch is now the actual change that I'm testing. So far it
> > > > > handles both my workload at work and my test program without any issues.
> > > >
> > > > I think a comment is needed for a reason to call vn_writechk() second time.
> > >
> > > Fixed.
> > >
> > > > Could you, please, point me, where the FHASLOCK is set for O_EXLOCK | O_SHLOCK
> > > > case in the patched kernel ?
> > >
> > > It wasn't. :( I wonder how this was even working since close shouldn't have
> > > been unlocking. I'll need to do some more testing. BTW, I ran into fhopen()
> > > and found that I would need to put all this same logic into that, so I've split
> > > the common code from fhopen() and vn_open_cred() into a new vn_open_vnode().
> > > I think in general it improves both sets of code.
> > >
> > > I'll update the patch once I've done some more testing.
>
> Based on feedback from Konstantin, I have split the vn_open_vnode() changes
> out into a separate patch. Once that patch is in the tree I will revisit
> this and update the actual bug-fix patch.
>
> The vn_open_vnode() patch is at
> http://www.freebsd.org/~jhb/patches/vn_open_vnode.patch
>
> I tested it by doing a buildworld -j 32 in a loop while NFS exporting the
> /usr/obj tree to another machine that did a continual find | xargs md5 loop
> over the /usr/obj tree. This survived overnight.

Here now is the tested version of the actual fix after the vn_open_vnode()
changes were committed. This is hopefully easier to parse now.

http://www.FreeBSD.org/~jhb/patches/flock_open_close4.patch

I'm enclosing an updated copy of the test program below:
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void
usage(void)
{

	fprintf(stderr, "Usage: flock_close_race <binary> [args]\n");
	exit(1);
}

static void
child(const char *binary)
{
	int fd;

	/* Exit as soon as our parent exits. */
	while (getppid() != 1) {
		fd = open(binary, O_RDWR | O_EXLOCK);
		if (fd < 0) {
			/*
			 * This may get ETXTBSY since exit() will
			 * close its open fd's (thus releasing the
			 * lock), before it releases the vmspace (and
			 * mapping of the binary).
			 */
			if (errno == ETXTBSY)
				continue;
			err(1, "can't open %s", binary);
		}
		close(fd);
	}
	exit(0);
}

static void
exec_child(char **av)
{
	int fd;

	fd = open(av[0], O_RDONLY | O_SHLOCK);
	execv(av[0], av);
	err(127, "execv");
}

int
main(int ac, char **av)
{
	struct stat sb;
	pid_t pid;

	if (ac < 2)
		usage();
	if (stat(av[1], &sb) != 0)
		err(1, "stat(%s)", av[1]);
	if (!S_ISREG(sb.st_mode))
		errx(1, "%s not an executable", av[1]);
	pid = fork();
	if (pid < 0)
		err(1, "fork");
	if (pid == 0)
		child(av[1]);
	for (;;) {
		pid = fork();
		if (pid < 0)
			err(1, "vfork");
		if (pid == 0)
			exec_child(av + 1);
		wait(NULL);
	}
	return (0);
}
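To build and drive the reproducer as described earlier in the thread
(assuming the program above is saved as flock_close_race.c):

    cc -o flock_close_race flock_close_race.c
    cp /bin/test /tmp/foo
    chmod a+w /tmp/foo
    ./flock_close_race /tmp/foo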
--
John Baldwin

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 17:06:54 2012
Date: Mon, 9 Jul 2012 18:07:37 +0100
From: "Steven Hartland" <killing@multiplay.co.uk>
To: freebsd-fs@freebsd.org, "Johannes Totz"
Subject: Re: zfs send glitch

----- Original Message -----
From: "Johannes Totz"

> zfs send with verbose flag fails for some reason, whereas omitting the
> verbose flag works (beware of line breaks):
>
> # zfs send -vRI @120203-2320 backup/alexs-imac/120607-0056@120607-0056 |
>   zfs receive -vun panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320
>
> send from @120203-2320 to backup/alexs-imac/120607-0056@120603-2311
> estimated size is 32.8G
> send from @120603-2311 to backup/alexs-imac/120607-0056@120607-0056
> estimated size is 8.56G
> total estimated size is 41.3G
> cannot hold 'backup/alexs-imac/120607-0056@120203-2320': pool must be
> upgraded
> WARNING: could not send backup/alexs-imac/120607-0056@120607-0056:
> incremental source (backup/alexs-imac/120607-0056@120203-2320) does not
> exist
>
> And now without verbose flag:
>
> # zfs send -RI @120203-2320 backup/alexs-imac/120607-0056@120607-0056 |
>   zfs receive -vu panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320
> receiving incremental stream of
> backup/alexs-imac/120607-0056@120603-2311 into
> panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320@120603-2311

Are you sure it's the verbose flag which breaks it, or does it just work
on the second run? We've seen very strange behaviour with send/receive
recently, but I've not had time to sit down and confirm exactly what's
happening.

Regards
Steve

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 19:35:30 2012
Date: Mon, 9 Jul 2012 15:35:28 -0400
From: Andy Young <ayoung@mosaicarchive.com>
To: freebsd-fs@freebsd.org
Subject: Recreating a ZFS pool from existing disks?

One of our servers has a hard drive that contains the OS and then a set of
24 drives that were organized into two ZFS pools. I replaced the system
drive this morning assuming, perhaps misguidedly, that I could easily
recreate the two ZFS pools from the 24 drives. (You could do this pretty
easily with RAID6 in Linux as I recall.) When I looked closer at zpool,
however, it's not obvious how to do this. I tried using zpool create but it
complained that the drives were already part of another pool.

Is there a way to recreate a zpool directly from the disks?

Thanks!
--
Andrew Young

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 19:37:12 2012
Date: Mon, 9 Jul 2012 12:37:00 -0700
From: Steven Schlansker <stevenschlansker@gmail.com>
To: Andy Young
Cc: freebsd-fs@freebsd.org
Message-Id: <949F13D2-F847-4AF4-AC2F-EACD3B735CC8@gmail.com>
Subject: Re: Recreating a ZFS pool from existing disks?

Take a look at "zpool import"

Without arguments, it will list the importable pools. With a pool name or
id, it will add the offline pool to the system and bring it online.
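In practice that flow looks like this (a minimal sketch; "tank" stands in
for whatever pool name the listing reports, and -f applies only when the
pool was not cleanly exported):

    # Scan attached disks and list pools that can be imported.
    zpool import
    # Import by name or by the numeric id from the listing; -f forces the
    # import when the pool was last in use on another system installation.
    zpool import -f tank
    # Verify.
    zpool status tank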
On Jul 9, 2012, at 12:35 PM, Andy Young wrote:

> One of our servers has a hard drive that contains the OS and then a set of
> 24 drives that were organized into two ZFS pools. I replaced the system
> drive this morning assuming, perhaps misguidedly, that I could easily
> recreate the two ZFS pools from the 24 drives. (You could do this pretty
> easily with RAID6 in Linux as I recall.) When I looked closer at zpool,
> however, it's not obvious how to do this. I tried using zpool create but
> it complained that the drives were already part of another pool.
>
> Is there a way to recreate a zpool directly from the disks?
>
> Thanks!
>
> --
> Andrew Young

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 19:57:55 2012
Date: Mon, 9 Jul 2012 15:57:54 -0400
From: Andy Young <ayoung@mosaicarchive.com>
To: Steven Schlansker
Cc: freebsd-fs@freebsd.org
Subject: Re: Recreating a ZFS pool from existing disks?

Thanks Steven! That was way too easy. I avoided import, assuming I had to
have exported it to begin with.

Andy

On Mon, Jul 9, 2012 at 3:37 PM, Steven Schlansker
<stevenschlansker@gmail.com> wrote:

> Take a look at "zpool import"
>
> Without arguments, it will list the importable pools. With a pool name or
> id, it will add the offline pool to the system and bring it online.
>
> On Jul 9, 2012, at 12:35 PM, Andy Young wrote:
>
> > One of our servers has a hard drive that contains the OS and then a set
> > of 24 drives that were organized into two ZFS pools. I replaced the
> > system drive this morning assuming, perhaps misguidedly, that I could
> > easily recreate the two ZFS pools from the 24 drives. (You could do this
> > pretty easily with RAID6 in Linux as I recall.) When I looked closer at
> > zpool, however, it's not obvious how to do this. I tried using zpool
> > create but it complained that the drives were already part of another
> > pool.
> >
> > Is there a way to recreate a zpool directly from the disks?
> >
> > Thanks!
> >
> > --
> > Andrew Young

--
Andrew Young
Mosaic Storage Systems, Inc
http://www.mosaicarchive.com/

From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 19:58:20 2012
Date: Mon, 9 Jul 2012 12:58:14 -0700 (PDT)
From: Jason Usher <jusher71@yahoo.com>
To: Zaphod Beeblebrox
Cc: freebsd-fs@freebsd.org
Message-ID: <1341863894.36655.YahooMailClassic@web122501.mail.ne1.yahoo.com>
Subject: Re: vdev/pool math with combined raidzX vdevs...

Hello again,

--- On Fri, 7/6/12, Zaphod Beeblebrox wrote:

> ...
> so, again with simplistic assumptions,
>
> p(36drz3 --- 12 drives, 3 groups) = p(12drz3) * 3
>
> A "vanilla" RAID-Z2 (if I make an assumption to what you're saying) is:
>
> p(36drz2) = 36 * p(f) * 35 * p(f)
>
> ... but I can't directly answer your question without knowing a) the
> structure of the RAID-Z2 array and p(f).  If we use a 1% figure for
> p(f), then P(36drz3,12,3) = 0.035% and p(36drz2) = 4.3%

(snip)

> Put simply, you add the probabilities of things where any can cause
> the failure (either drive of R0 failing, any one of the 3 plexes of a
> complex array failing) and you multiply things where all must fail to
> produce failure.

Ok. So let's start with those numbers from that hardforum link I posted:

(probability of data loss during a rebuild)

RAID-10:
F = 5%

RAID-Z1:
1 - (1 - F)^(9 - 1) = 33.7%
F = 33.7%

RAID-Z2:
1 - (1 - F)^(10 - 1) - (10 - 1) F (1 - F)^(10 - 2) = 7.1%
F = 7.1%

RAID-Z3:
1 - (1 - F)^(11 - 1) - (11 - 1) F (1 - F)^(11 - 2) - (11 - 1)(11 - 2) F^2 (1 - F)^(11 - 3) / 2
F = 1.15%

Again, it doesn't really matter what F is, since we are only interested in
the comparison...

From what you said above, striping 3 different raidz3 arrays together into
one pool is ADDITIVE ... so the 1.15% rises to 3.45%.

Yes?

So we triple our risk by running all three raidz3 arrays in one pool, but
we still have less than half the risk of a single raidz2 vdev (with no
striping), which is 7.1%.

Am I on the right track here? I think I'm missing something, because with
one raidz3 I have a 1.15% chance of "losing a drive during rebuild", but I
am thinking about completely healthy arrays which have a larger chance of
blowing up because ONE OF THE OTHER vdevs blows four drives simultaneously.

So I am really comparing 0% probability (if they aren't combined in a
zpool, I can take one vdev out and run over it with a train and the other
vdev is unharmed) with X% probability, because now something happening in
the other vdev can ruin the healthy one...

Am I really the only person worrying about the interactive failure
properties of combining vdevs into a pool ?
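As a sanity check on the additive step (assuming the three raidz3 vdevs
fail independently), the exact pool-loss probability is

    P(pool loss) = 1 - (1 - p)^3
                 = 1 - (1 - 0.0115)^3
                 ~= 0.0341  (3.41%)

versus the additive estimate 3p = 3.45%. The sum overstates the exact
value only by the cross terms 3p^2 - p^3 (about 0.04% here), so treating
per-vdev loss probabilities as additive is a sound first-order
approximation at these magnitudes.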
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 20:13:17 2012
Date: Mon, 09 Jul 2012 13:13:07 -0700
From: Dennis Glatting <freebsd@pki2.com>
To: freebsd-fs@freebsd.org
Message-ID: <1341864787.32803.43.camel@btw.pki2.com>
Subject: ZFS hanging

I have a ZFS array of disks where the system simply stops as if forever
blocked by some IO mutex. This happens often and the following is the
output of top:

last pid:  6075;  load averages:  0.00, 0.00, 0.00   up 0+16:54:41  13:04:10
135 processes: 1 running, 134 sleeping
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 47M Active, 24M Inact, 18G Wired, 120M Buf, 44G Free
Swap: 32G Total, 32G Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 2410 root        1  33    0 11992K  2820K zio->i  7 331:25  0.00% bzip2
 2621 root        1  52    4 28640K  5544K tx->tx 24 245:33  0.00% john
 2624 root        1  48    4 28640K  5544K tx->tx  4 239:08  0.00% john
 2623 root        1  49    4 28640K  5544K tx->tx  7 238:44  0.00% john
 2640 root        1  42    4 28640K  5420K tx->tx 23 206:51  0.00% john
 2638 root        1  42    4 28640K  5420K tx->tx 28 206:34  0.00% john
 2639 root        1  42    4 28640K  5420K tx->tx  9 206:30  0.00% john
 2637 root        1  42    4 28640K  5420K tx->tx 18 206:24  0.00% john

This system is presently resilvering a disk but these stops have happened
before.

iirc# zpool status disk-1
  pool: disk-1
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Jul  8 13:07:46 2012
        104G scanned out of 12.4T at 1.73M/s, (scan is slow, no estimated time)
        10.3G resilvered, 0.82% done
config:

        NAME                        STATE     READ WRITE CKSUM
        disk-1                      DEGRADED     0     0     0
          raidz2-0                  DEGRADED     0     0     0
            da1                     ONLINE       0     0     0
            da2                     ONLINE       0     0     0
            da10                    ONLINE       0     0     0
            da9                     ONLINE       0     0     0
            da5                     ONLINE       0     0     0
            da6                     ONLINE       0     0     0
            da7                     ONLINE       0     0     0
            replacing-7             DEGRADED     0     0     0
              17938531774236227186  UNAVAIL      0     0     0  was /dev/da8
              da3                   ONLINE       0     0     0  (resilvering)
            da8                     ONLINE       0     0     0
            da4                     ONLINE       0     0     0
        logs
          ada2p1                    ONLINE       0     0     0
        cache
          ada1                      ONLINE       0     0     0

errors: No known data errors

This system has dissimilar disks, which I understand should not be a
problem, but the stopping also happened before I started the slow disk
upgrade process.

The disks are served by:

* A LSI 9211 flashed to IT, and
* A LSI 2008 controller on the motherboard, also flashed to IT.
The 2008 BIOS and firmware is the most recent from LSI. The motherboard is
a Supermicro H8DG6-F.

My question is: what should I be looking at, and how should I look at it?
There is nothing in the logs or on the console; rather, the system is
forever paused and entering commands produces no response (it's as if
everything is deadlocked).
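A first round of data gathering for a hang like this, using only
base-system tools (a sketch, run from any session that still responds; the
pool name is taken from the post):

    # Kernel stack of every thread; wedged ZFS threads usually show where
    # they sleep (e.g. zio_wait, txg_wait_open).
    procstat -kk -a > /var/tmp/stacks.txt
    # Is any vdev in the pool still completing I/O?
    zpool iostat -v disk-1 1
    # Per-disk latency and queue depth as GEOM sees it.
    gstat
    # Do the controllers still enumerate the da* devices?
    camcontrol devlist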
From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 20:38:17 2012
Date: Mon, 09 Jul 2012 13:38:05 -0700
From: Dennis Glatting <dg@pki2.com>
To: freebsd-fs@freebsd.org
Message-ID: <1341866285.32803.45.camel@btw.pki2.com>
In-Reply-To: <1341864787.32803.43.camel@btw.pki2.com>
Subject: More data (dmesg verbose): ZFS hanging

At the end is the dmesg output booted in verbose mode. I also included
this:

iirc# uname -a
FreeBSD iirc 9.0-STABLE FreeBSD 9.0-STABLE #14: Sun Jul 8 16:54:00 PDT
2012 root@iirc:/sys/amd64/compile/SMUNI amd64

On Mon, 2012-07-09 at 13:13 -0700, Dennis Glatting wrote:
> I have a ZFS array of disks where the system simply stops as if forever
> blocked by some IO mutex. This happens often and the following is the
> output of top:
> [...]
> My question is: what should I be looking at, and how should I look at it?
> There is nothing in the logs or on the console; rather, the system is
> forever paused and entering commands produces no response (it's as if
> everything is deadlocked).

iirc# dmesg
cmdreg=0x0117, statreg=0x02a0, cachelnsz=16 (dwords)
lattimer=0x40 (1920 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns)
intpin=a, irq=11
map[10]: type Memory, range 32, base 0xdfbf7000, size 12, enabled
pcib0: allocated type 3 (0xdfbf7000-0xdfbf7fff) for rid 10 of pci0:0:18:0
pcib0: matched entry for 0.18.INTA
pcib0: slot 18 INTA hardwired to IRQ 16
ohci early: SMM active, request owner change
found-> vendor=0x1002, dev=0x4398, revid=0x00
domain=0, bus=0, slot=18, func=1
class=0c-03-10, hdrtype=0x00, mfdev=0
cmdreg=0x0117, statreg=0x02a0, cachelnsz=16 (dwords)
lattimer=0x40 (1920 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns)
intpin=a, irq=11
map[10]: type Memory, range 32, base 0xdfbf6000, size 12, enabled
pcib0: allocated type 3 (0xdfbf6000-0xdfbf6fff) for rid 10 of pci0:0:18:1
pcib0: matched entry for 0.18.INTA
pcib0: slot 18 INTA hardwired to IRQ 16
ohci early: SMM active, request owner change
found-> vendor=0x1002, dev=0x4396, revid=0x00
domain=0, bus=0, slot=18, func=2
class=0c-03-20, hdrtype=0x00, mfdev=0
cmdreg=0x0102, statreg=0x02b0, cachelnsz=16 (dwords)
lattimer=0x40 (1920 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns)
intpin=b, irq=10
powerspec 2 supports D0 D1 D2 D3 current D0
map[10]: type Memory, range 32, base 0xdfbf8800, size 8, enabled
pcib0: allocated type 3 (0xdfbf8800-0xdfbf88ff) for rid 10 of pci0:0:18:2
pcib0: matched entry for 0.18.INTB
pcib0: slot 18 INTB hardwired to IRQ 17
found-> vendor=0x1002, dev=0x4397, revid=0x00
domain=0, bus=0, slot=19, func=0
class=0c-03-10, hdrtype=0x00, mfdev=1
cmdreg=0x0117, statreg=0x02a0, cachelnsz=16 (dwords)
lattimer=0x40 (1920 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns)
intpin=a, irq=10
map[10]: type Memory, range 32, base 0xdfbfa000, size 12, enabled
pcib0: allocated type 3 (0xdfbfa000-0xdfbfafff) for rid 10 of pci0:0:19:0
pcib0: matched entry for 0.19.INTA
pcib0: slot 19 INTA hardwired to IRQ 18
ohci early: SMM active, request owner change
found-> vendor=0x1002, dev=0x4398, revid=0x00
domain=0, bus=0, slot=19, func=1
class=0c-03-10, hdrtype=0x00, mfdev=0
cmdreg=0x0117, statreg=0x02a0, cachelnsz=16 (dwords)
lattimer=0x40 (1920 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns)
intpin=a, irq=10
map[10]: type Memory, range 32, base 0xdfbf9000, size 12, enabled
allocated type 3 (0xdfbf9000-0xdfbf9fff) for rid 10 of pci0:0:19:1 pcib0: matched entry for 0.19.INTA pcib0: slot 19 INTA hardwired to IRQ 18 ohci early: SMM active, request owner change found-> vendor=0x1002, dev=0x4396, revid=0x00 domain=0, bus=0, slot=19, func=2 class=0c-03-20, hdrtype=0x00, mfdev=0 cmdreg=0x0102, statreg=0x02b0, cachelnsz=16 (dwords) lattimer=0x40 (1920 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=b, irq=10 powerspec 2 supports D0 D1 D2 D3 current D0 map[10]: type Memory, range 32, base 0xdfbf8c00, size 8, enabled pcib0: allocated type 3 (0xdfbf8c00-0xdfbf8cff) for rid 10 of pci0:0:19:2 pcib0: matched entry for 0.19.INTB pcib0: slot 19 INTB hardwired to IRQ 19 found-> vendor=0x1002, dev=0x4385, revid=0x3d domain=0, bus=0, slot=20, func=0 class=0c-05-00, hdrtype=0x00, mfdev=1 cmdreg=0x0403, statreg=0x0230, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1002, dev=0x439c, revid=0x00 domain=0, bus=0, slot=20, func=1 class=01-01-8a, hdrtype=0x00, mfdev=0 cmdreg=0x0005, statreg=0x0230, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=a, irq=255 MSI supports 2 messages pcib0: allocated type 4 (0x1f0-0x1f7) for rid 10 of pci0:0:20:1 pcib0: allocated type 4 (0x3f6-0x3f6) for rid 14 of pci0:0:20:1 pcib0: allocated type 4 (0x170-0x177) for rid 18 of pci0:0:20:1 pcib0: allocated type 4 (0x376-0x376) for rid 1c of pci0:0:20:1 map[20]: type I/O Port, range 32, base 0xff00, size 4, enabled pcib0: allocated type 4 (0xff00-0xff0f) for rid 20 of pci0:0:20:1 found-> vendor=0x1002, dev=0x439d, revid=0x00 domain=0, bus=0, slot=20, func=3 class=06-01-00, hdrtype=0x00, mfdev=1 cmdreg=0x000f, statreg=0x0220, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1002, dev=0x4384, revid=0x00 domain=0, bus=0, slot=20, func=4 class=06-04-01, hdrtype=0x01, mfdev=1 cmdreg=0x0107, statreg=0x02a0, cachelnsz=0 (dwords) lattimer=0x40 (1920 ns), mingnt=0x1a (6500 ns), maxlat=0x00 (0 ns) found-> vendor=0x1002, dev=0x4399, revid=0x00 domain=0, bus=0, slot=20, func=5 class=0c-03-10, hdrtype=0x00, mfdev=0 cmdreg=0x0117, statreg=0x02a0, cachelnsz=16 (dwords) lattimer=0x40 (1920 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=c, irq=10 map[10]: type Memory, range 32, base 0xdfbfb000, size 12, enabled pcib0: allocated type 3 (0xdfbfb000-0xdfbfbfff) for rid 10 of pci0:0:20:5 pcib0: matched entry for 0.20.INTC pcib0: slot 20 INTC hardwired to IRQ 18 ohci early: SMM active, request owner change found-> vendor=0x1022, dev=0x1600, revid=0x00 domain=0, bus=0, slot=24, func=0 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1601, revid=0x00 domain=0, bus=0, slot=24, func=1 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1602, revid=0x00 domain=0, bus=0, slot=24, func=2 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1603, revid=0x00 domain=0, bus=0, slot=24, func=3 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1604, revid=0x00 domain=0, bus=0, slot=24, 
func=4 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1605, revid=0x00 domain=0, bus=0, slot=24, func=5 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1600, revid=0x00 domain=0, bus=0, slot=25, func=0 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1601, revid=0x00 domain=0, bus=0, slot=25, func=1 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1602, revid=0x00 domain=0, bus=0, slot=25, func=2 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1603, revid=0x00 domain=0, bus=0, slot=25, func=3 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1604, revid=0x00 domain=0, bus=0, slot=25, func=4 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1605, revid=0x00 domain=0, bus=0, slot=25, func=5 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1600, revid=0x00 domain=0, bus=0, slot=26, func=0 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1601, revid=0x00 domain=0, bus=0, slot=26, func=1 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1602, revid=0x00 domain=0, bus=0, slot=26, func=2 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1603, revid=0x00 domain=0, bus=0, slot=26, func=3 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1604, revid=0x00 domain=0, bus=0, slot=26, func=4 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1605, revid=0x00 domain=0, bus=0, slot=26, func=5 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1600, revid=0x00 domain=0, bus=0, slot=27, func=0 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1601, revid=0x00 domain=0, bus=0, slot=27, func=1 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) 
lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1602, revid=0x00 domain=0, bus=0, slot=27, func=2 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1603, revid=0x00 domain=0, bus=0, slot=27, func=3 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1604, revid=0x00 domain=0, bus=0, slot=27, func=4 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) found-> vendor=0x1022, dev=0x1605, revid=0x00 domain=0, bus=0, slot=27, func=5 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0000, statreg=0x0000, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) pci0: at device 0.2 (no driver attached) pcib1: irq 16 at device 4.0 on pci0 pcib0: allocated type 4 (0xe000-0xefff) for rid 1c of pcib1 pcib0: allocated type 3 (0xdff00000-0xdfffffff) for rid 20 of pcib1 pcib1: domain 0 pcib1: secondary bus 5 pcib1: subordinate bus 5 pcib1: I/O decode 0xe000-0xefff pcib1: memory decode 0xdff00000-0xdfffffff pcib1: no prefetched decode pci5: on pcib1 pci5: domain=0, physical bus=5 found-> vendor=0x17d3, dev=0x1880, revid=0x05 domain=0, bus=5, slot=0, func=0 class=01-04-00, hdrtype=0x00, mfdev=0 cmdreg=0x0147, statreg=0x0010, cachelnsz=16 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=a, irq=11 powerspec 3 supports D0 D3 current D0 MSI supports 1 message, 64 bit MSI-X supports 15 messages in map 0x14 map[10]: type I/O Port, range 32, base 0xe800, size 8, enabled pcib1: allocated I/O port range (0xe800-0xe8ff) for rid 10 of pci0:5:0:0 map[14]: type Memory, range 64, base 0xdff30000, size 16, enabled pcib1: allocated memory range (0xdff30000-0xdff3ffff) for rid 14 of pci0:5:0:0 map[1c]: type Memory, range 64, base 0xdff40000, size 18, enabled pcib1: allocated memory range (0xdff40000-0xdff7ffff) for rid 1c of pci0:5:0:0 pcib1: matched entry for 5.0.INTA pcib1: slot 0 INTA hardwired to IRQ 16 arcmsr0: port 0xe800-0xe8ff mem 0xdff30000-0xdff3ffff,0xdff40000-0xdff7ffff irq 16 at device 0.0 on pci5 ARECA RAID ADAPTER0: Driver Version 1.20.00.22 2011-07-04 ARECA RAID ADAPTER0: FIRMWARE VERSION V1.49 2011-08-02 ioapic0: routing intpin 16 (PCI IRQ 16) to lapic 32 vector 51 pcib2: irq 19 at device 11.0 on pci0 pcib0: allocated type 4 (0xd000-0xdfff) for rid 1c of pcib2 pcib0: allocated type 3 (0xdfe00000-0xdfefffff) for rid 20 of pcib2 pcib2: domain 0 pcib2: secondary bus 4 pcib2: subordinate bus 4 pcib2: I/O decode 0xd000-0xdfff pcib2: memory decode 0xdfe00000-0xdfefffff pcib2: no prefetched decode pci4: on pcib2 pci4: domain=0, physical bus=4 found-> vendor=0x1000, dev=0x0072, revid=0x03 domain=0, bus=4, slot=0, func=0 class=01-07-00, hdrtype=0x00, mfdev=0 cmdreg=0x0147, statreg=0x0010, cachelnsz=16 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=a, irq=10 powerspec 3 supports D0 D1 D2 D3 current D0 MSI supports 1 message, 64 bit MSI-X supports 15 messages in map 0x14 map[10]: type I/O Port, range 32, base 0xd000, size 8, enabled pcib2: allocated I/O port range (0xd000-0xd0ff) for rid 10 of pci0:4:0:0 map[14]: type Memory, range 64, base 0xdfe3c000, size 14, enabled pcib2: allocated memory range (0xdfe3c000-0xdfe3ffff) for rid 14 of pci0:4:0:0 
map[1c]: type Memory, range 64, base 0xdfe40000, size 18, enabled pcib2: allocated memory range (0xdfe40000-0xdfe7ffff) for rid 1c of pci0:4:0:0 pcib2: matched entry for 4.0.INTA pcib2: slot 0 INTA hardwired to IRQ 19 mps0: port 0xd000-0xd0ff mem 0xdfe3c000-0xdfe3ffff,0xdfe40000-0xdfe7ffff irq 19 at device 0.0 on pci4 mps0: Firmware: 13.00.57.00, Driver: 14.00.00.01-fbsd mps0: IOCCapabilities: 1285c mps0: attempting to allocate 1 MSI-X vectors (15 supported) msi: routing MSI-X IRQ 256 to local APIC 32 vector 52 mps0: using IRQ 256 for MSI-X pcib3: irq 16 at device 12.0 on pci0 pcib0: allocated type 4 (0xc000-0xcfff) for rid 1c of pcib3 pcib0: allocated type 3 (0xdfd00000-0xdfdfffff) for rid 20 of pcib3 pcib3: domain 0 pcib3: secondary bus 3 pcib3: subordinate bus 3 pcib3: I/O decode 0xc000-0xcfff pcib3: memory decode 0xdfd00000-0xdfdfffff pcib3: no prefetched decode pci3: on pcib3 pci3: domain=0, physical bus=3 found-> vendor=0x1000, dev=0x0072, revid=0x03 domain=0, bus=3, slot=0, func=0 class=01-07-00, hdrtype=0x00, mfdev=0 cmdreg=0x0147, statreg=0x0010, cachelnsz=16 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=a, irq=11 powerspec 3 supports D0 D1 D2 D3 current D0 MSI supports 1 message, 64 bit MSI-X supports 15 messages in map 0x14 map[10]: type I/O Port, range 32, base 0xc000, size 8, enabled pcib3: allocated I/O port range (0xc000-0xc0ff) for rid 10 of pci0:3:0:0 map[14]: type Memory, range 64, base 0xdfd3c000, size 14, enabled pcib3: allocated memory range (0xdfd3c000-0xdfd3ffff) for rid 14 of pci0:3:0:0 map[1c]: type Memory, range 64, base 0xdfd40000, size 18, enabled pcib3: allocated memory range (0xdfd40000-0xdfd7ffff) for rid 1c of pci0:3:0:0 pcib3: matched entry for 3.0.INTA pcib3: slot 0 INTA hardwired to IRQ 16 mps1: port 0xc000-0xc0ff mem 0xdfd3c000-0xdfd3ffff,0xdfd40000-0xdfd7ffff irq 16 at device 0.0 on pci3 mps1: Firmware: 13.00.57.00, Driver: 14.00.00.01-fbsd mps1: IOCCapabilities: 1285c mps1: attempting to allocate 1 MSI-X vectors (15 supported) msi: routing MSI-X IRQ 257 to local APIC 32 vector 53 mps1: using IRQ 257 for MSI-X pcib4: irq 17 at device 13.0 on pci0 pcib0: allocated type 4 (0xb000-0xbfff) for rid 1c of pcib4 pcib0: allocated type 3 (0xdfc00000-0xdfcfffff) for rid 20 of pcib4 pcib4: domain 0 pcib4: secondary bus 2 pcib4: subordinate bus 2 pcib4: I/O decode 0xb000-0xbfff pcib4: memory decode 0xdfc00000-0xdfcfffff pcib4: no prefetched decode pci2: on pcib4 pci2: domain=0, physical bus=2 found-> vendor=0x8086, dev=0x10c9, revid=0x01 domain=0, bus=2, slot=0, func=0 class=02-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0147, statreg=0x0010, cachelnsz=16 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=a, irq=10 powerspec 3 supports D0 D3 current D0 MSI supports 1 message, 64 bit, vector masks MSI-X supports 10 messages in map 0x1c map[10]: type Memory, range 32, base 0xdfce0000, size 17, enabled pcib4: allocated memory range (0xdfce0000-0xdfcfffff) for rid 10 of pci0:2:0:0 map[14]: type Memory, range 32, base 0xdfcc0000, size 17, enabled pcib4: allocated memory range (0xdfcc0000-0xdfcdffff) for rid 14 of pci0:2:0:0 map[18]: type I/O Port, range 32, base 0xb800, size 5, enabled pcib4: allocated I/O port range (0xb800-0xb81f) for rid 18 of pci0:2:0:0 map[1c]: type Memory, range 32, base 0xdfc9c000, size 14, enabled pcib4: allocated memory range (0xdfc9c000-0xdfc9ffff) for rid 1c of pci0:2:0:0 pcib4: matched entry for 2.0.INTA pcib4: slot 0 INTA hardwired to IRQ 17 found-> vendor=0x8086, dev=0x10c9, revid=0x01 
domain=0, bus=2, slot=0, func=1 class=02-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0147, statreg=0x0010, cachelnsz=16 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=b, irq=10 powerspec 3 supports D0 D3 current D0 MSI supports 1 message, 64 bit, vector masks MSI-X supports 10 messages in map 0x1c map[10]: type Memory, range 32, base 0xdfc60000, size 17, enabled pcib4: allocated memory range (0xdfc60000-0xdfc7ffff) for rid 10 of pci0:2:0:1 map[14]: type Memory, range 32, base 0xdfc40000, size 17, enabled pcib4: allocated memory range (0xdfc40000-0xdfc5ffff) for rid 14 of pci0:2:0:1 map[18]: type I/O Port, range 32, base 0xb400, size 5, enabled pcib4: allocated I/O port range (0xb400-0xb41f) for rid 18 of pci0:2:0:1 map[1c]: type Memory, range 32, base 0xdfc1c000, size 14, enabled pcib4: allocated memory range (0xdfc1c000-0xdfc1ffff) for rid 1c of pci0:2:0:1 pcib4: matched entry for 2.0.INTB pcib4: slot 0 INTB hardwired to IRQ 18 igb0: port 0xb800-0xb81f mem 0xdfce0000-0xdfcfffff,0xdfcc0000-0xdfcdffff,0xdfc9c000-0xdfc9ffff irq 17 at device 0.0 on pci2 igb0: attempting to allocate 9 MSI-X vectors (10 supported) msi: routing MSI-X IRQ 258 to local APIC 32 vector 54 msi: routing MSI-X IRQ 259 to local APIC 32 vector 55 msi: routing MSI-X IRQ 260 to local APIC 32 vector 56 msi: routing MSI-X IRQ 261 to local APIC 32 vector 57 msi: routing MSI-X IRQ 262 to local APIC 32 vector 58 msi: routing MSI-X IRQ 263 to local APIC 32 vector 59 msi: routing MSI-X IRQ 264 to local APIC 32 vector 60 msi: routing MSI-X IRQ 265 to local APIC 32 vector 61 msi: routing MSI-X IRQ 266 to local APIC 32 vector 62 igb0: using IRQs 258-266 for MSI-X igb0: Using MSIX interrupts with 9 vectors igb0: bpf attached igb0: Ethernet address: 00:25:90:71:04:d8 igb0: Bound queue 0 to cpu 0 igb0: Bound queue 1 to cpu 1 igb0: Bound queue 2 to cpu 2 igb0: Bound queue 3 to cpu 3 igb0: Bound queue 4 to cpu 4 igb0: Bound queue 5 to cpu 5 igb0: Bound queue 6 to cpu 6 igb0: Bound queue 7 to cpu 7 igb1: port 0xb400-0xb41f mem 0xdfc60000-0xdfc7ffff,0xdfc40000-0xdfc5ffff,0xdfc1c000-0xdfc1ffff irq 18 at device 0.1 on pci2 igb1: attempting to allocate 9 MSI-X vectors (10 supported) msi: routing MSI-X IRQ 267 to local APIC 32 vector 63 msi: routing MSI-X IRQ 268 to local APIC 32 vector 64 msi: routing MSI-X IRQ 269 to local APIC 32 vector 65 msi: routing MSI-X IRQ 270 to local APIC 32 vector 66 msi: routing MSI-X IRQ 271 to local APIC 32 vector 67 msi: routing MSI-X IRQ 272 to local APIC 32 vector 68 msi: routing MSI-X IRQ 273 to local APIC 32 vector 69 msi: routing MSI-X IRQ 274 to local APIC 32 vector 70 msi: routing MSI-X IRQ 275 to local APIC 32 vector 71 igb1: using IRQs 267-275 for MSI-X igb1: Using MSIX interrupts with 9 vectors igb1: bpf attached igb1: Ethernet address: 00:25:90:71:04:d9 igb1: Bound queue 0 to cpu 8 igb1: Bound queue 1 to cpu 9 igb1: Bound queue 2 to cpu 10 igb1: Bound queue 3 to cpu 11 igb1: Bound queue 4 to cpu 12 igb1: Bound queue 5 to cpu 13 igb1: Bound queue 6 to cpu 14 igb1: Bound queue 7 to cpu 15 ahci0: port 0xa000-0xa007,0x9000-0x9003,0x8000-0x8007,0x7000-0x7003,0x6000-0x600f mem 0xdfbf8400-0xdfbf87ff irq 22 at device 17.0 on pci0 ioapic0: routing intpin 22 (PCI IRQ 22) to lapic 32 vector 72 ahci0: AHCI v1.10 with 4 3Gbps ports, Port Multiplier supported ahci0: Caps: 64bit NCQ SNTF MPS ALP AL CLO 3Gbps PM PMD SSC PSC 32cmd CCC 4ports ahci0: Caps2: ahcich0: at channel 0 on ahci0 ahcich0: Caps: ahcich1: at channel 1 on ahci0 ahcich1: Caps: ahcich2: at channel 2 on ahci0 ahcich2: Caps: 
ahcich3: at channel 3 on ahci0 ahcich3: Caps: ohci0: mem 0xdfbf7000-0xdfbf7fff irq 16 at device 18.0 on pci0 usbus0 on ohci0 usbus0: bpf attached ohci0: usbpf: Attached ohci1: mem 0xdfbf6000-0xdfbf6fff irq 16 at device 18.1 on pci0 usbus1 on ohci1 usbus1: bpf attached ohci1: usbpf: Attached ehci0: mem 0xdfbf8800-0xdfbf88ff irq 17 at device 18.2 on pci0 ioapic0: routing intpin 17 (PCI IRQ 17) to lapic 32 vector 73 ehci0: Dropped interrupts workaround enabled usbus2: EHCI version 1.0 usbus2 on ehci0 usbus2: bpf attached ehci0: usbpf: Attached ohci2: mem 0xdfbfa000-0xdfbfafff irq 18 at device 19.0 on pci0 ioapic0: routing intpin 18 (PCI IRQ 18) to lapic 32 vector 74 usbus3 on ohci2 usbus3: bpf attached ohci2: usbpf: Attached ohci3: mem 0xdfbf9000-0xdfbf9fff irq 18 at device 19.1 on pci0 usbus4 on ohci3 usbus4: bpf attached ohci3: usbpf: Attached ehci1: mem 0xdfbf8c00-0xdfbf8cff irq 19 at device 19.2 on pci0 ioapic0: routing intpin 19 (PCI IRQ 19) to lapic 32 vector 75 ehci1: Dropped interrupts workaround enabled usbus5: EHCI version 1.0 usbus5 on ehci1 usbus5: bpf attached ehci1: usbpf: Attached pci0: at device 20.0 (no driver attached) atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xff00-0xff0f at device 20.1 on pci0 atapci0: SATA controller enabled (combined mode, primary channel) ata0: at channel 0 on atapci0 ioapic0: routing intpin 14 (ISA IRQ 14) to lapic 32 vector 76 ata1: at channel 1 on atapci0 ioapic0: routing intpin 15 (ISA IRQ 15) to lapic 32 vector 77 isab0: at device 20.3 on pci0 isa0: on isab0 pcib5: at device 20.4 on pci0 pcib0: allocated type 3 (0xdef00000-0xdf7fffff) for rid 20 of pcib5 pcib5: failed to allocate initial prefetch window: 0xdd000000-0xddffffff pcib5: domain 0 pcib5: secondary bus 1 pcib5: subordinate bus 1 pcib5: memory decode 0xdef00000-0xdf7fffff pcib5: no prefetched decode pcib5: Subtractively decoded bridge. 
pci1: on pcib5 pci1: domain=0, physical bus=1 found-> vendor=0x102b, dev=0x0532, revid=0x0a domain=0, bus=1, slot=4, func=0 class=03-00-00, hdrtype=0x00, mfdev=0 cmdreg=0x0007, statreg=0x0290, cachelnsz=16 (dwords) lattimer=0x40 (1920 ns), mingnt=0x10 (4000 ns), maxlat=0x20 (8000 ns) intpin=a, irq=11 powerspec 1 supports D0 D3 current D0 map[10]: type Prefetchable Memory, range 32, base 0xdd000000, size 24, enabled map[14]: type Memory, range 32, base 0xdeffc000, size 14, enabled pcib5: allocated memory range (0xdeffc000-0xdeffffff) for rid 14 of pci0:1:4:0 map[18]: type Memory, range 32, base 0xdf000000, size 23, enabled pcib5: allocated memory range (0xdf000000-0xdf7fffff) for rid 18 of pci0:1:4:0 pcib5: matched entry for 1.4.INTA pcib5: slot 4 INTA hardwired to IRQ 20 vgapci0: mem 0xdeffc000-0xdeffffff,0xdf000000-0xdf7fffff irq 20 at device 4.0 on pci1 ohci4: mem 0xdfbfb000-0xdfbfbfff irq 18 at device 20.5 on pci0 usbus6 on ohci4 usbus6: bpf attached ohci4: usbpf: Attached pcib6: on acpi0 pci64: on pcib6 pci64: domain=0, physical bus=64 found-> vendor=0x1002, dev=0x5a10, revid=0x02 domain=0, bus=64, slot=0, func=0 class=06-00-00, hdrtype=0x00, mfdev=1 cmdreg=0x0002, statreg=0x2010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) MSI supports 4 messages found-> vendor=0x1002, dev=0x5a23, revid=0x00 domain=0, bus=64, slot=0, func=2 class=08-06-00, hdrtype=0x00, mfdev=1 cmdreg=0x0004, statreg=0x0010, cachelnsz=0 (dwords) lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns) intpin=a, irq=11 MSI supports 1 message, 64 bit pcib6: matched entry for 64.0.INTA pcib6: slot 0 INTA hardwired to IRQ 16 pci64: at device 0.2 (no driver attached) acpi_button0: on acpi0 uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0 ioapic0: routing intpin 4 (ISA IRQ 4) to lapic 32 vector 78 uart0: fast interrupt uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0 ioapic0: routing intpin 3 (ISA IRQ 3) to lapic 32 vector 79 uart1: fast interrupt uart2: <16550 or compatible> port 0x3e8-0x3ef irq 7 on acpi0 ioapic0: routing intpin 7 (ISA IRQ 7) to lapic 32 vector 80 uart2: fast interrupt acpi0: wakeup code va 0xffffff918407a000 pa 0x80000 ex_isa_identify() ahc_isa_probe 0: ioport 0xc00 alloc failed ahc_isa_probe 1: ioport 0x1c00 alloc failed ahc_isa_probe 2: ioport 0x2c00 alloc failed ahc_isa_probe 3: ioport 0x3c00 alloc failed ahc_isa_probe 4: ioport 0x4c00 alloc failed ahc_isa_probe 5: ioport 0x5c00 alloc failed ahc_isa_probe 6: ioport 0x6c00 alloc failed ahc_isa_probe 7: ioport 0x7c00 alloc failed ahc_isa_probe 8: ioport 0x8c00 alloc failed ahc_isa_probe 9: ioport 0x9c00 alloc failed ahc_isa_probe 10: ioport 0xac00 alloc failed ahc_isa_probe 11: ioport 0xbc00 alloc failed ahc_isa_probe 12: ioport 0xcc00 alloc failed ahc_isa_probe 13: ioport 0xdc00 alloc failed ahc_isa_probe 14: ioport 0xec00 alloc failed pcib0: allocated type 3 (0xa0000-0xa07ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa0800-0xa0fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa1000-0xa17ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa1800-0xa1fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa2000-0xa27ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa2800-0xa2fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa3000-0xa37ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa3800-0xa3fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa4000-0xa47ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa4800-0xa4fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa5000-0xa57ff) for rid 0 
of orm0 pcib0: allocated type 3 (0xa5800-0xa5fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa6000-0xa67ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa6800-0xa6fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa7000-0xa77ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa7800-0xa7fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa8000-0xa87ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa8800-0xa8fff) for rid 0 of orm0 pcib0: allocated type 3 (0xa9000-0xa97ff) for rid 0 of orm0 pcib0: allocated type 3 (0xa9800-0xa9fff) for rid 0 of orm0 pcib0: allocated type 3 (0xaa000-0xaa7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xaa800-0xaafff) for rid 0 of orm0 pcib0: allocated type 3 (0xab000-0xab7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xab800-0xabfff) for rid 0 of orm0 pcib0: allocated type 3 (0xac000-0xac7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xac800-0xacfff) for rid 0 of orm0 pcib0: allocated type 3 (0xad000-0xad7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xad800-0xadfff) for rid 0 of orm0 pcib0: allocated type 3 (0xae000-0xae7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xae800-0xaefff) for rid 0 of orm0 pcib0: allocated type 3 (0xaf000-0xaf7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xaf800-0xaffff) for rid 0 of orm0 pcib0: allocated type 3 (0xb0000-0xb07ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb0800-0xb0fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb1000-0xb17ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb1800-0xb1fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb2000-0xb27ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb2800-0xb2fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb3000-0xb37ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb3800-0xb3fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb4000-0xb47ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb4800-0xb4fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb5000-0xb57ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb5800-0xb5fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb6000-0xb67ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb6800-0xb6fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb7000-0xb77ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb7800-0xb7fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb8000-0xb87ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb8800-0xb8fff) for rid 0 of orm0 pcib0: allocated type 3 (0xb9000-0xb97ff) for rid 0 of orm0 pcib0: allocated type 3 (0xb9800-0xb9fff) for rid 0 of orm0 pcib0: allocated type 3 (0xba000-0xba7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xba800-0xbafff) for rid 0 of orm0 pcib0: allocated type 3 (0xbb000-0xbb7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xbb800-0xbbfff) for rid 0 of orm0 pcib0: allocated type 3 (0xbc000-0xbc7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xbc800-0xbcfff) for rid 0 of orm0 pcib0: allocated type 3 (0xbd000-0xbd7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xbd800-0xbdfff) for rid 0 of orm0 pcib0: allocated type 3 (0xbe000-0xbe7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xbe800-0xbefff) for rid 0 of orm0 pcib0: allocated type 3 (0xbf000-0xbf7ff) for rid 0 of orm0 pcib0: allocated type 3 (0xbf800-0xbffff) for rid 0 of orm0 pcib0: allocated type 3 (0xd0000-0xd07ff) for rid 1 of orm0 pcib0: allocated type 3 (0xd0800-0xd0fff) for rid 1 of orm0 pcib0: allocated type 3 (0xd1000-0xd17ff) for rid 1 of orm0 pcib0: allocated type 3 (0xd1800-0xd1fff) for rid 1 of orm0 pcib0: allocated type 3 (0xd2000-0xd27ff) for rid 1 of orm0 pcib0: allocated type 3 (0xd2800-0xd2fff) for rid 1 of orm0 pcib0: 
allocated type 3 (0xd3000-0xd37ff) for rid 1 of orm0 pcib0: allocated type 3 (0xd3800-0xd3fff) for rid 1 of orm0 pcib0: allocated type 3 (0xd4000-0xd47ff) for rid 1 of orm0 pcib0: allocated type 3 (0xd4000-0xd4fff) for rid 1 of orm0 pcib0: allocated type 3 (0xd5000-0xd57ff) for rid 2 of orm0 pcib0: allocated type 3 (0xd5800-0xd5fff) for rid 2 of orm0 pcib0: allocated type 3 (0xd6000-0xd67ff) for rid 2 of orm0 pcib0: allocated type 3 (0xd6800-0xd6fff) for rid 2 of orm0 pcib0: allocated type 3 (0xd7000-0xd77ff) for rid 2 of orm0 pcib0: allocated type 3 (0xd7800-0xd7fff) for rid 2 of orm0 pcib0: allocated type 3 (0xd8000-0xd87ff) for rid 2 of orm0 pcib0: allocated type 3 (0xd8800-0xd8fff) for rid 2 of orm0 pcib0: allocated type 3 (0xd9000-0xd97ff) for rid 2 of orm0 pcib0: allocated type 3 (0xd9800-0xd9fff) for rid 2 of orm0 pcib0: allocated type 3 (0xda000-0xda7ff) for rid 2 of orm0 pcib0: allocated type 3 (0xda800-0xdafff) for rid 2 of orm0 pcib0: allocated type 3 (0xdb000-0xdb7ff) for rid 2 of orm0 pcib0: allocated type 3 (0xdb800-0xdbfff) for rid 2 of orm0 pcib0: allocated type 3 (0xdc000-0xdc7ff) for rid 2 of orm0 pcib0: allocated type 3 (0xdc800-0xdcfff) for rid 2 of orm0 pcib0: allocated type 3 (0xdd000-0xdd7ff) for rid 2 of orm0 pcib0: allocated type 3 (0xdd800-0xddfff) for rid 2 of orm0 pcib0: allocated type 3 (0xde000-0xde7ff) for rid 2 of orm0 pcib0: allocated type 3 (0xde800-0xdefff) for rid 2 of orm0 pcib0: allocated type 3 (0xdf000-0xdf7ff) for rid 2 of orm0 pcib0: allocated type 3 (0xdf800-0xdffff) for rid 2 of orm0 isa_probe_children: disabling PnP devices ipmi0: on isa0 ipmi0: KCS mode found at io 0xca2 alignment 0x1 on isa pcib0: allocated type 4 (0xca2-0xca3) for rid 0 of ipmi0 atrtc: atrtc0 already exists; skipping it attimer: attimer0 already exists; skipping it sc: sc0 already exists; skipping it uart: uart0 already exists; skipping it uart: uart1 already exists; skipping it isa_probe_children: probing non-PnP devices orm0: at iomem 0xc0000-0xc7fff,0xd4000-0xd4fff on isa0 sc0: at flags 0x100 on isa0 sc0: VGA <16 virtual consoles, flags=0x300> sc0: fb0, kbd1, terminal emulator: scteken (teken terminal) vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 pcib0: allocated type 4 (0x3c0-0x3df) for rid 0 of vga0 pcib0: allocated type 3 (0xa0000-0xbffff) for rid 0 of vga0 pcib0: allocated type 4 (0x60-0x60) for rid 0 of atkbdc0 pcib0: allocated type 4 (0x64-0x64) for rid 1 of atkbdc0 atkbdc0: at port 0x60,0x64 on isa0 pcib0: allocated type 4 (0x60-0x60) for rid 0 of atkbdc0 pcib0: allocated type 4 (0x64-0x64) for rid 1 of atkbdc0 atkbd0: irq 1 on atkbdc0 kbd0 at atkbd0 kbd0: atkbd0, generic (0), config:0x0, flags:0x3f0000 ioapic0: routing intpin 1 (ISA IRQ 1) to lapic 32 vector 81 atkbd0: [GIANT-LOCKED] psm0: unable to allocate IRQ pcib0: allocated type 4 (0x3f0-0x3f5) for rid 0 of fdc0 pcib0: allocated type 4 (0x3f7-0x3f7) for rid 1 of fdc0 fdc0 failed to probe at port 0x3f0-0x3f5,0x3f7 irq 6 drq 2 on isa0 ppc0: cannot reserve I/O port range ppc0 failed to probe at irq 7 on isa0 wbwd0 failed to probe on isa0 isa_probe_children: probing PnP devices AcpiOsExecute: failed to enqueue task, consider increasing the debug.acpi.max_tasks tunable acpi_throttle0: on cpu0 acpi_throttle0: P_CNT from P_BLK 0x810 Device configuration finished. procfs registered lapic: Divisor 2, Frequency 100002195 Hz Timecounters tick every 1.000 msec vlan: initialized, using hash tables with chaining lo0: bpf attached hptrr: no controller detected. 
usbus0: 12Mbps Full Speed USB v1.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 480Mbps High Speed USB v2.0 usbus3: 12Mbps Full Speed USB v1.0 usbus4: 12Mbps Full Speed USB v1.0 usbus5: 480Mbps High Speed USB v2.0 usbus6: 12Mbps Full Speed USB v1.0 ugen0.1: at usbus0 uhub0: on usbus0 ugen1.1: at usbus1 uhub1: on usbus1 ugen2.1: at usbus2 uhub2: on usbus2 ugen3.1: at usbus3 uhub3: on usbus3 ugen4.1: at usbus4 uhub4: on usbus4 ugen5.1: at usbus5 uhub5: on usbus5 ugen6.1: at usbus6 uhub6: on usbus6 uhub6: 2 ports with 2 removable, self powered uhub0: 3 ports with 3 removable, self powered uhub1: 3 ports with 3 removable, self powered uhub3: 3 ports with 3 removable, self powered uhub4: 3 ports with 3 removable, self powered ahcich0: AHCI reset... ahcich0: SATA connect time=100us status=00000123 ahcich0: AHCI reset: device found ahcich1: AHCI reset... ahcich1: SATA connect time=100us status=00000123 ahcich1: AHCI reset: device found ahcich2: AHCI reset... ahcich2: SATA connect time=100us status=00000123 ahcich2: AHCI reset: device found ahcich3: AHCI reset... ahcich3: SATA connect time=100us status=00000123 ahcich3: AHCI reset: device found ata0: reset tp1 mask=03 ostat0=7f ostat1=7f ahcich0: AHCI reset: device ready after 100ms ahcich1: AHCI reset: device ready after 100ms ahcich2: AHCI reset: device ready after 100ms ahcich3: AHCI reset: device ready after 100ms ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ugen6.2: at usbus6 ums0: on usbus6 ums0: 3 buttons and [Z] coordinates ID=0 ukbd0: on usbus6 kbd2 at ukbd0 kbd2: ukbd0, generic (0), config:0x0, flags:0x3d0000 ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff uhub2: 6 ports with 6 removable, self powered uhub5: 6 ports with 6 removable, self powered ata0: stat0=0x7f err=0xff lsb=0xff msb=0xff ata0: stat1=0x7f err=0xff lsb=0xff msb=0xff ata0: reset tp2 stat0=ff stat1=ff devices=0x0 ata1: reset tp1 mask=03 ostat0=7f ostat1=7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f arcmsr:scsi id=2 lun=0 device lost arcmsr:scsi id=3 lun=0 device lost arcmsr:scsi id=4 lun=0 device lost arcmsr:scsi id=5 lun=0 device lost arcmsr:scsi id=6 lun=0 device lost arcmsr:scsi id=7 lun=0 device lost arcmsr:scsi id=8 lun=0 device lost (probe2:arcmsr0:0:16:1): INQUIRY. CDB: 12 20 0 0 24 0 (probe2:arcmsr0:0:16:1): CAM status: Command timeout (probe2:arcmsr0:0:16:1): Retrying command (probe2:arcmsr0:0:16:1): INQUIRY. CDB: 12 20 0 0 24 0 (probe2:arcmsr0:0:16:1): CAM status: Command timeout (probe2:arcmsr0:0:16:1): Retrying command (probe2:arcmsr0:0:16:1): INQUIRY. 
CDB: 12 20 0 0 24 0 (probe2:arcmsr0:0:16:1): CAM status: Command timeout (probe2:arcmsr0:0:16:1): Retrying command (probe2:arcmsr0:0:16:1): INQUIRY. CDB: 12 20 0 0 24 0 (probe2:arcmsr0:0:16:1): CAM status: Command timeout (probe2:arcmsr0:0:16:1): Retrying command (probe2:arcmsr0:0:16:1): INQUIRY. CDB: 12 20 0 0 24 0 (probe2:arcmsr0:0:16:1): CAM status: Command timeout (probe2:arcmsr0:0:16:1): Error 5, Retries exhausted (probe2:arcmsr0:0:16:2): INQUIRY. CDB: 12 40 0 0 24 0 (probe2:arcmsr0:0:16:2): CAM status: Command timeout (probe2:arcmsr0:0:16:2): Retrying command (probe2:arcmsr0:0:16:2): INQUIRY. CDB: 12 40 0 0 24 0 (probe2:arcmsr0:0:16:2): CAM status: Command timeout (probe2:arcmsr0:0:16:2): Retrying command (probe2:arcmsr0:0:16:2): INQUIRY. CDB: 12 40 0 0 24 0 (probe2:arcmsr0:0:16:2): CAM status: Command timeout (probe2:arcmsr0:0:16:2): Retrying command (probe2:arcmsr0:0:16:2): INQUIRY. CDB: 12 40 0 0 24 0 (probe2:arcmsr0:0:16:2): CAM status: Command timeout (probe2:arcmsr0:0:16:2): Retrying command (probe2:arcmsr0:0:16:2): INQUIRY. CDB: 12 40 0 0 24 0 (probe2:arcmsr0:0:16:2): CAM status: Command timeout (probe2:arcmsr0:0:16:2): Error 5, Retries exhausted (probe2:arcmsr0:0:16:3): INQUIRY. CDB: 12 60 0 0 24 0 (probe2:arcmsr0:0:16:3): CAM status: Command timeout (probe2:arcmsr0:0:16:3): Retrying command (probe2:arcmsr0:0:16:3): INQUIRY. CDB: 12 60 0 0 24 0 (probe2:arcmsr0:0:16:3): CAM status: Command timeout (probe2:arcmsr0:0:16:3): Retrying command (probe2:arcmsr0:0:16:3): INQUIRY. CDB: 12 60 0 0 24 0 (probe2:arcmsr0:0:16:3): CAM status: Command timeout (probe2:arcmsr0:0:16:3): Retrying command (probe2:arcmsr0:0:16:3): INQUIRY. CDB: 12 60 0 0 24 0 (probe2:arcmsr0:0:16:3): CAM status: Command timeout (probe2:arcmsr0:0:16:3): Retrying command (probe2:arcmsr0:0:16:3): INQUIRY. CDB: 12 60 0 0 24 0 (probe2:arcmsr0:0:16:3): CAM status: Command timeout (probe2:arcmsr0:0:16:3): Error 5, Retries exhausted (probe2:arcmsr0:0:16:4): INQUIRY. CDB: 12 80 0 0 24 0 (probe2:arcmsr0:0:16:4): CAM status: Command timeout (probe2:arcmsr0:0:16:4): Retrying command (probe2:arcmsr0:0:16:4): INQUIRY. CDB: 12 80 0 0 24 0 (probe2:arcmsr0:0:16:4): CAM status: Command timeout (probe2:arcmsr0:0:16:4): Retrying command (probe2:arcmsr0:0:16:4): INQUIRY. CDB: 12 80 0 0 24 0 (probe2:arcmsr0:0:16:4): CAM status: Command timeout (probe2:arcmsr0:0:16:4): Retrying command (probe2:arcmsr0:0:16:4): INQUIRY. CDB: 12 80 0 0 24 0 (probe2:arcmsr0:0:16:4): CAM status: Command timeout (probe2:arcmsr0:0:16:4): Retrying command (probe2:arcmsr0:0:16:4): INQUIRY. CDB: 12 80 0 0 24 0 (probe2:arcmsr0:0:16:4): CAM status: Command timeout (probe2:arcmsr0:0:16:4): Error 5, Retries exhausted (probe2:arcmsr0:0:16:5): INQUIRY. CDB: 12 a0 0 0 24 0 (probe2:arcmsr0:0:16:5): CAM status: Command timeout (probe2:arcmsr0:0:16:5): Retrying command (probe2:arcmsr0:0:16:5): INQUIRY. CDB: 12 a0 0 0 24 0 (probe2:arcmsr0:0:16:5): CAM status: Command timeout (probe2:arcmsr0:0:16:5): Retrying command (probe2:arcmsr0:0:16:5): INQUIRY. CDB: 12 a0 0 0 24 0 (probe2:arcmsr0:0:16:5): CAM status: Command timeout (probe2:arcmsr0:0:16:5): Retrying command (probe2:arcmsr0:0:16:5): INQUIRY. CDB: 12 a0 0 0 24 0 (probe2:arcmsr0:0:16:5): CAM status: Command timeout (probe2:arcmsr0:0:16:5): Retrying command (probe2:arcmsr0:0:16:5): INQUIRY. CDB: 12 a0 0 0 24 0 (probe2:arcmsr0:0:16:5): CAM status: Command timeout (probe2:arcmsr0:0:16:5): Error 5, Retries exhausted (probe2:arcmsr0:0:16:6): INQUIRY. 
CDB: 12 c0 0 0 24 0 (probe2:arcmsr0:0:16:6): CAM status: Command timeout (probe2:arcmsr0:0:16:6): Retrying command (probe2:arcmsr0:0:16:6): INQUIRY. CDB: 12 c0 0 0 24 0 (probe2:arcmsr0:0:16:6): CAM status: Command timeout (probe2:arcmsr0:0:16:6): Retrying command (probe2:arcmsr0:0:16:6): INQUIRY. CDB: 12 c0 0 0 24 0 (probe2:arcmsr0:0:16:6): CAM status: Command timeout (probe2:arcmsr0:0:16:6): Retrying command (probe2:arcmsr0:0:16:6): INQUIRY. CDB: 12 c0 0 0 24 0 (probe2:arcmsr0:0:16:6): CAM status: Command timeout (probe2:arcmsr0:0:16:6): Retrying command (probe2:arcmsr0:0:16:6): INQUIRY. CDB: 12 c0 0 0 24 0 (probe2:arcmsr0:0:16:6): CAM status: Command timeout (probe2:arcmsr0:0:16:6): Error 5, Retries exhausted (probe2:arcmsr0:0:16:7): INQUIRY. CDB: 12 e0 0 0 24 0 (probe2:arcmsr0:0:16:7): CAM status: Command timeout (probe2:arcmsr0:0:16:7): Retrying command (probe2:arcmsr0:0:16:7): INQUIRY. CDB: 12 e0 0 0 24 0 (probe2:arcmsr0:0:16:7): CAM status: Command timeout (probe2:arcmsr0:0:16:7): Retrying command (probe2:arcmsr0:0:16:7): INQUIRY. CDB: 12 e0 0 0 24 0 (probe2:arcmsr0:0:16:7): CAM status: Command timeout (probe2:arcmsr0:0:16:7): Retrying command (probe2:arcmsr0:0:16:7): INQUIRY. CDB: 12 e0 0 0 24 0 (probe2:arcmsr0:0:16:7): CAM status: Command timeout (probe2:arcmsr0:0:16:7): Retrying command (probe2:arcmsr0:0:16:7): INQUIRY. CDB: 12 e0 0 0 24 0 (probe2:arcmsr0:0:16:7): CAM status: Command timeout (probe2:arcmsr0:0:16:7): Error 5, Retries exhausted arcmsr:scsi id=15 lun=0 device lost arcmsr:scsi id=17 lun=0 device lost arcmsr:scsi id=1 lun=0 device lost arcmsr:scsi id=9 lun=0 device lost arcmsr:scsi id=10 lun=0 device lost arcmsr:scsi id=11 lun=0 device lost arcmsr:scsi id=12 lun=0 device lost arcmsr:scsi id=13 lun=0 device lost arcmsr:scsi id=14 lun=0 device lost ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ugen0.2: at usbus0 ukbd1: on usbus0 kbd3 at ukbd1 kbd3: ukbd1, generic (0), config:0x0, flags:0x3d0000 uhid0: on usbus0 ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat0=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: stat1=0x7f err=0x7f lsb=0x7f msb=0x7f ata1: reset tp2 stat0=ff stat1=ff devices=0x0 ipmi0: Timed out waiting for GET_DEVICE_ID da0 at arcmsr0 bus 0 scbus0 target 0 lun 0 da0: Fixed Direct Access SCSI-5 device da0: Serial Number 415da12117747090 da0: 166.666MB/s transfers (83.333MHz, offset 32, 16bit) da0: Command Queueing enabled da0: 953869MB (1953523712 512 byte sectors: 255H 63S/T 121601C) GEOM: new disk da0 da1 at mps0 bus 0 scbus1 target 0 lun 0 da1: Fixed Direct Access SCSI-6 device da1: Serial Number 5XW0VMRB da1: 300.000MB/s transfers da1: Command Queueing enabled da1: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C) da5 at mps1 bus 0 scbus2 target 1 lun 0 da5: Fixed Direct Access SCSI-6 device da5: Serial Number 5XW1NE0B da5: 300.000MB/s transfers da5: Command Queueing enabled da5: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C) da4 at mps0 bus 0 scbus1 target 6 lun 0 da4: Fixed Direct Access SCSI-6 device da4: Serial Number W1F0Q5HB da4: 600.000MB/s transfers da4: Command Queueing enabled da4: 2861588MB (5860533168 512 byte sectors: 255H 63S/T 364801C) da2 at mps0 bus 0 scbus1 target 1 lun 0 da2: Fixed Direct Access SCSI-6 device da2: Serial Number 6XW25VPZ da2: 300.000MB/s 
transfers da2: Command Queueing enabled da2: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C) pass0 at arcmsr0 bus 0 scbus0 target 0 lun 0 pass0: Fixed Direct Access SCSI-5 device pass0: Serial Number 415da12117747090 pass0: 166.666MB/s transfers (83.333MHz, offset 32, 16bit) pass0: Command Queueing enabled da6 at mps1 bus 0 scbus2 target 2 lun 0 da6: Fixed Direct Access SCSI-6 device da6: Serial Number 5XW0YM3C da6: 300.000MB/s transfers da6: Command Queueing enabled da6: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C) da8 at mps1 bus 0 scbus2 target 5 lun 0 da8: Fixed Direct Access SCSI-6 device da8: Serial Number 5YD31SF5 da8: 600.000MB/s transfers da8: Command Queueing enabled da8: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C) da7 at mps1 bus 0 scbus2 target 3 lun 0 da7: Fixed Direct Access SCSI-6 device da7: Serial Number 5XW1S594 da7: 300.000MB/s transfers da7: Command Queueing enabled da7: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C) da10 at mps1 bus 0 scbus2 target 8 lun 0 da10: Fixed Direct Access SCSI-6 device da10: Serial Number 6XW1N03A da10: 300.000MB/s transfers da10: Command Queueing enabled da10: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C) GEOM: new disk da1 pass1 at arcmsr0 bus 0 scbus0 target 16 lun 0 pass1: Fixed Processor SCSI-0 device pass2 at mps0 bus 0 scbus1 target 0 lun 0 pass2: Fixed Direct Access SCSI-6 device pass2: Serial Number 5XW0VMRB pass2: 300.000MB/s transfers pass2: Command Queueing enabled pass3 at mps0 bus 0 scbus1 target 1 lun 0 pass3: Fixed Direct Access SCSI-6 device pass3: Serial Number 6XW25VPZ pass3: 300.000MB/s transfers pass3: Command Queueing enabled pass4 at mps0 bus 0 scbus1 target 5 lun 0 pass4: Fixed Direct Access SCSI-6 device pass4: Serial Number S1F0MCEJ pass4: 600.000MB/s transfers pass4: Command Queueing enabled pass5 at mps0 bus 0 scbus1 target 6 lun 0 pass5: Fixed Direct Access SCSI-6 device pass5: Serial Number W1F0Q5HB pass5: 600.000MB/s transfers pass5: Command Queueing enabled pass6 at mps1 bus 0 scbus2 target 1 lun 0 pass6: Fixed Direct Access SCSI-6 device pass6: Serial Number 5XW1NE0B pass6: 300.000MB/s transfers pass6: Command Queueing enabled pass7 at mps1 bus 0 scbus2 target 2 lun 0 pass7: Fixed Direct Access SCSI-6 device pass7: Serial Number 5XW0YM3C pass7: 300.000MB/s transfers pass7: Command Queueing enabled pass8 at mps1 bus 0 scbus2 target 3 lun 0 pass8: Fixed Direct Access SCSI-6 device pass8: Serial Number 5XW1S594 pass8: 300.000MB/s transfers pass8: Command Queueing enabled pass9 at mps1 bus 0 scbus2 target 5 lun 0 pass9: Fixed Direct Access SCSI-6 device pass9: Serial Number 5YD31SF5 pass9: 600.000MB/s transfers pass9: Command Queueing enabled pass10 at mps1 bus 0 scbus2 target 7 lun 0 pass10: Fixed Direct Access SCSI-6 device pass10: Serial Number S1F0KX3V pass10: 600.000MB/s transfers pass10: Command Queueing enabled pass11 at mps1 bus 0 scbus2 target 8 lun 0 pass11: Fixed Direct Access SCSI-6 device pass11: Serial Number 6XW1N03A pass11: 300.000MB/s transfers pass11: Command Queueing enabled pass12 at ahcich0 bus 0 scbus3 target 0 lun 0 pass12: ATA-8 SATA 3.x device pass12: Serial Number 5YD3YMPG pass12: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) pass12: Command Queueing enabled pass13 at ahcich1 bus 0 scbus4 target 0 lun 0 pass13: ATA-8 SATA 2.x device pass13: Serial Number OCZ-2DVTJK3D5M0MLMAF pass13: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) pass13: Command Queueing enabled pass14 at ahcich2 bus 0 scbus5 target 0 
lun 0 pass14: ATA-8 SATA 2.x device pass14: Serial Number OCZ-48QYY703EGQ36XIA pass14: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) pass14: Command Queueing enabled da3 at mps0 bus 0 scbus1 target 5 lun 0 da3: Fixed Direct Access SCSI-6 device da3: Serial Number S1F0MCEJ da3: 600.000MB/s transfers da3: Command Queueing enabled da3: 2861588MB (5860533168 512 byte sectors: 255H 63S/T 364801C) pass15 at ahcich3 bus 0 scbus6 target 0 lun 0 pass15: ATA-8 SATA 3.x device pass15: Serial Number PL2311LAG06AJC pass15: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) pass15: Command Queueing enabled ada0 at ahcich0 bus 0 scbus3 target 0 lun 0 ada0: ATA-8 SATA 3.x device ada0: Serial Number 5YD3YMPG ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada0: Command Queueing enabled ada0: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C) ada0: Previously was known as ad4 ada1 at ahcich1 bus 0 scbus4 target 0 lun 0 ada1: ATA-8 SATA 2.x device ada1: Serial Number OCZ-2DVTJK3D5M0MLMAF ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada1: Command Queueing enabled ada1: 57241MB (117231408 512 byte sectors: 16H 63S/T 16383C) ada1: Previously was known as ad6 ada2 at ahcich2 bus 0 scbus5 target 0 lun 0 ada2: ATA-8 SATA 2.x device ada2: Serial Number OCZ-48QYY703EGQ36XIA ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada2: Command Queueing enabled ada2: 57241MB (117231408 512 byte sectors: 16H 63S/T 16383C) ada2: Previously was known as ad8 ada3 at ahcich3 bus 0 scbus6 target 0 lun 0 ada3: ATA-8 SATA 3.x device da9 at mps1 bus 0 scbus2 target 7 lun 0 da9: Fixed Direct Access SCSI-6 device da9: Serial Number S1F0KX3V da9: 600.000MB/s transfers da9: Command Queueing enabled da9: 2861588MB (5860533168 512 byte sectors: 255H 63S/T 364801C) ada3: Serial Number PL2311LAG06AJC ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada3: Command Queueing enabled ada3: 3815447MB (7814037168 512 byte sectors: 16H 63S/T 16383C) ada3: Previously was known as ad10 SMP: AP CPU #1 Launched! cpu1 AP: ID: 0x21000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #4 Launched! cpu4 AP: ID: 0x24000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #5 Launched! cpu5 AP: ID: 0x25000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #6 Launched! cpu6 AP: ID: 0x26000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #13 Launched! cpu13 AP: ID: 0x2d000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #8 Launched! cpu8 AP: ID: 0x28000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #11 Launched! 
cpu11 AP: ID: 0x2b000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #2 Launched! cpu2 AP: ID: 0x22000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #3 Launched! cpu3 AP: ID: 0x23000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #14 Launched! cpu14 AP: ID: 0x2e000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #15 Launched! cpu15 AP: ID: 0x2f000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #23 Launched! cpu23 AP: ID: 0x47000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #12 Launched! cpu12 AP: ID: 0x2c000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #30 Launched! cpu30 AP: ID: 0x4e000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #7 Launched! cpu7 AP: ID: 0x27000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #26 Launched! cpu26 AP: ID: 0x4a000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #31 Launched! cpu31 AP: ID: 0x4f000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #27 Launched! cpu27 AP: ID: 0x4b000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #28 Launched! cpu28 AP: ID: 0x4c000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #16 Launched! cpu16 AP: ID: 0x40000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #19 Launched! cpu19 AP: ID: 0x43000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #24 Launched! 
cpu24 AP: ID: 0x48000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #25 Launched! cpu25 AP: ID: 0x49000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #17 Launched! cpu17 AP: ID: 0x41000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #18 Launched! cpu18 AP: ID: 0x42000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #29 Launched! cpu29 AP: ID: 0x4d000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #10 Launched! cpu10 AP: ID: 0x2a000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #22 Launched! cpu22 AP: ID: 0x46000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #21 Launched! cpu21 AP: ID: 0x45000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #9 Launched! cpu9 AP: ID: 0x29000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 SMP: AP CPU #20 Launched! 
cpu20 AP: ID: 0x44000000 VER: 0x80050010 LDR: 0x00000000 DFR: 0xffffffff lint0: 0x00010700 lint1: 0x00000400 TPR: 0x00000000 SVR: 0x000001ff timer: 0x000100ef therm: 0x00010000 err: 0x000000f0 pmc: 0x00010400 ioapic0: routing intpin 1 (ISA IRQ 1) to lapic 33 vector 48 ioapic0: routing intpin 3 (ISA IRQ 3) to lapic 34 vector 48 ioapic0: routing intpin 4 (ISA IRQ 4) to lapic 35 vector 48 ioapic0: routing intpin 7 (ISA IRQ 7) to lapic 36 vector 48 ioapic0: routing intpin 9 (ISA IRQ 9) to lapic 37 vector 48 ioapic0: routing intpin 14 (ISA IRQ 14) to lapic 38 vector 48 ioapic0: routing intpin 15 (ISA IRQ 15) to lapic 39 vector 48 ioapic0: routing intpin 16 (PCI IRQ 16) to lapic 40 vector 48 ioapic0: routing intpin 17 (PCI IRQ 17) to lapic 41 vector 48 ioapic0: routing intpin 18 (PCI IRQ 18) to lapic 42 vector 48 ioapic0: routing intpin 19 (PCI IRQ 19) to lapic 43 vector 48 ioapic0: routing intpin 22 (PCI IRQ 22) to lapic 44 vector 48 msi: Assigning MSI-X IRQ 256 to local APIC 45 vector 48 msi: Assigning MSI-X IRQ 257 to local APIC 46 vector 48 msi: Assigning MSI-X IRQ 259 to local APIC 33 vector 49 msi: Assigning MSI-X IRQ 260 to local APIC 34 vector 49 msi: Assigning MSI-X IRQ 261 to local APIC 35 vector 49 msi: Assigning MSI-X IRQ 262 to local APIC 36 vector 49 msi: Assigning MSI-X IRQ 263 to local APIC 37 vector 49 msi: Assigning MSI-X IRQ 264 to local APIC 38 vector 49 msi: Assigning MSI-X IRQ 265 to local APIC 39 vector 49 msi: Assigning MSI-X IRQ 266 to local APIC 47 vector 48 msi: Assigning MSI-X IRQ 267 to local APIC 40 vector 49 msi: Assigning MSI-X IRQ 268 to local APIC 41 vector 49 msi: Assigning MSI-X IRQ 269 to local APIC 42 vector 49 msi: Assigning MSI-X IRQ 270 to local APIC 43 vector 49 msi: Assigning MSI-X IRQ 271 to local APIC 44 vector 49 msi: Assigning MSI-X IRQ 272 to local APIC 45 vector 49 msi: Assigning MSI-X IRQ 273 to local APIC 46 vector 49 msi: Assigning MSI-X IRQ 274 to local APIC 47 vector 49 msi: Assigning MSI-X IRQ 275 to local APIC 64 vector 48 GEOM: new disk da2 GEOM: new disk da3 GEOM: new disk da4 GEOM: new disk da5 GEOM: new disk da6 GEOM: new disk da7 GEOM: new disk da8 SMP: passed TSC synchronization test GEOM: new disk da9 TSC timecounter discards lower 8 bit(s) GEOM: new disk da10 Timecounter "TSC-low" frequency 8593937 Hz quality 800 GEOM: new disk ada0 GEOM: new disk ada1 GEOM: new disk ada2 GEOM: new disk ada3 Trying to mount root from ufs:/dev/da0p3 [rw]... 
start_init: trying /sbin/init ZFS filesystem version 5 ZFS storage pool version 28 igb0: Link is up 1000 Mbps Full Duplex Linux ELF exec handler installed splash: image decoder found: daemon_saver iirc# -- Dennis Glatting From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 20:40:12 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D0010106564A; Mon, 9 Jul 2012 20:40:12 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from mail.zoral.com.ua (mx0.zoral.com.ua [91.193.166.200]) by mx1.freebsd.org (Postfix) with ESMTP id 4ADAC8FC12; Mon, 9 Jul 2012 20:40:12 +0000 (UTC) Received: from skuns.kiev.zoral.com.ua (localhost [127.0.0.1]) by mail.zoral.com.ua (8.14.2/8.14.2) with ESMTP id q69KeKaO081492; Mon, 9 Jul 2012 23:40:20 +0300 (EEST) (envelope-from kostikbel@gmail.com) Received: from deviant.kiev.zoral.com.ua (kostik@localhost [127.0.0.1]) by deviant.kiev.zoral.com.ua (8.14.5/8.14.5) with ESMTP id q69Ke8P6077996; Mon, 9 Jul 2012 23:40:08 +0300 (EEST) (envelope-from kostikbel@gmail.com) Received: (from kostik@localhost) by deviant.kiev.zoral.com.ua (8.14.5/8.14.5/Submit) id q69Ke7CQ077995; Mon, 9 Jul 2012 23:40:07 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: deviant.kiev.zoral.com.ua: kostik set sender to kostikbel@gmail.com using -f Date: Mon, 9 Jul 2012 23:40:07 +0300 From: Konstantin Belousov To: John Baldwin Message-ID: <20120709204007.GW2338@deviant.kiev.zoral.com.ua> References: <201203071318.08241.jhb@freebsd.org> <201203161406.27549.jhb@freebsd.org> <201206060817.54684.jhb@freebsd.org> <201207091138.15655.jhb@freebsd.org> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="JQJYpj0es6mGpGbU" Content-Disposition: inline In-Reply-To: <201207091138.15655.jhb@freebsd.org> User-Agent: Mutt/1.4.2.3i X-Virus-Scanned: clamav-milter 0.95.2 at skuns.kiev.zoral.com.ua X-Virus-Status: Clean X-Spam-Status: No, score=-4.0 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00 autolearn=ham version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on skuns.kiev.zoral.com.ua Cc: freebsd-fs@freebsd.org, pho@freebsd.org Subject: Re: close() of an flock'd file is not atomic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jul 2012 20:40:12 -0000 --JQJYpj0es6mGpGbU Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Mon, Jul 09, 2012 at 11:38:15AM -0400, John Baldwin wrote: > Here now is the tested version of the actual fix after the vn_open_vnode() > changes were committed. This is hopefully easier to parse now. > > http://www.FreeBSD.org/~jhb/patches/flock_open_close4.patch Do you need an atomic op to set FHASLOCK in vn_open_cred? I do not think *fp can be shared with another thread there. I thought that the vrele() call in vn_closefile() would need a vn_start_write() or vn_start_secondary_write() dance around it, but now I believe it is not needed, since ufs_inactive() handles start of secondary writes on its own. Still, it would be good if Peter could test the patch with a snapshotting load, just to be safe there. 
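The race under discussion can be pictured with a small sh sketch. This is an illustration of the contention pattern only, not jhb's patch or pho's test suite, and the lock-file path and timings are made-up examples. lockf(1) takes an flock(2)-style exclusive lock on its lock file, so if close() of an flock'd file were atomic, the contender below could only win the lock once the holder's descriptor had been completely torn down:

    # holder: take an exclusive flock, hold it briefly, then exit -> close()
    lockf -k /tmp/flock-race sleep 2 &
    sleep 1
    # contender: spin on non-blocking attempts (-s: silent, -t 0: fail at once if held)
    until lockf -skt 0 /tmp/flock-race true; do :; done
    echo "lock acquired after the holder's close()"

Whether a window between dropping the lock and finishing the close is observable here is exactly what the subject line refers to.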
--JQJYpj0es6mGpGbU Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.12 (FreeBSD) iEYEARECAAYFAk/7QacACgkQC3+MBN1Mb4iTRACeOu2eM6kV/PjF/9gnxDeE68BI M8AAoJoD2wTvzkW/yJb8kj9rtutSjdOO =MTYo -----END PGP SIGNATURE----- --JQJYpj0es6mGpGbU-- From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 20:51:22 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3F273106564A; Mon, 9 Jul 2012 20:51:22 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from bigwig.baldwin.cx (bigknife-pt.tunnel.tserv9.chi1.ipv6.he.net [IPv6:2001:470:1f10:75::2]) by mx1.freebsd.org (Postfix) with ESMTP id 13FD58FC22; Mon, 9 Jul 2012 20:51:22 +0000 (UTC) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id 6A5B9B963; Mon, 9 Jul 2012 16:51:21 -0400 (EDT) From: John Baldwin To: Konstantin Belousov Date: Mon, 9 Jul 2012 16:48:32 -0400 User-Agent: KMail/1.13.5 (FreeBSD/8.2-CBSD-20110714-p17; KDE/4.5.5; amd64; ; ) References: <201203071318.08241.jhb@freebsd.org> <201207091138.15655.jhb@freebsd.org> <20120709204007.GW2338@deviant.kiev.zoral.com.ua> In-Reply-To: <20120709204007.GW2338@deviant.kiev.zoral.com.ua> MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-15" Content-Transfer-Encoding: 7bit Message-Id: <201207091648.32306.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (bigwig.baldwin.cx); Mon, 09 Jul 2012 16:51:21 -0400 (EDT) Cc: freebsd-fs@freebsd.org, pho@freebsd.org Subject: Re: close() of an flock'd file is not atomic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jul 2012 20:51:22 -0000 On Monday, July 09, 2012 4:40:07 pm Konstantin Belousov wrote: > On Mon, Jul 09, 2012 at 11:38:15AM -0400, John Baldwin wrote: > > Here now is the tested version of the actual fix after the vn_open_vnode() > > changes were committed. This is hopefully easier to parse now. > > > > http://www.FreeBSD.org/~jhb/patches/flock_open_close4.patch > > Do you need atomic op to set FHASLOCK in vn_open_cred ? I do not think > *fp can be shared with other thread there. Oh, that's true. I had just preserved it from the original code. > I thought that vrele() call in vn_closefile() would need a > vn_start_write() or vn_start_secondary_write() dance around it, but > now I believe it is not needed, since ufs_inactive() handles start of > secondary writes on its own. Still, it would be good if Peter could test > the patch with snapshotting load just be to safe there. Ok. I'm happy to have pho@ test it, but the test will have to use file locking along with snapshots to exercise this case. 
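[The shape of such a test is simple. A sketch only -- this is not pho@'s actual stress2 code; the path, process count and iteration count are arbitrary, and the snapshot side (e.g. repeated mksnap_ffs(8) runs on the same filesystem) would run in parallel. The idea is to hammer the close()-drops-the-lock path while snapshots force vn_start_write() contention:]

    #include <sys/types.h>
    #include <sys/file.h>
    #include <sys/wait.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define NPROC    8
    #define NITER    100000
    #define LOCKPATH "/mnt/ufs/flocktest"  /* assumption: file on the snapshotted UFS fs */

    int
    main(void)
    {
            int fd, i, p;

            for (p = 0; p < NPROC; p++) {
                    if (fork() == 0) {
                            for (i = 0; i < NITER; i++) {
                                    fd = open(LOCKPATH, O_RDWR | O_CREAT, 0644);
                                    if (fd == -1)
                                            err(1, "open");
                                    if (flock(fd, LOCK_EX) == -1)
                                            err(1, "flock");
                                    close(fd);      /* implicit unlock: the path under test */
                            }
                            _exit(0);
                    }
            }
            for (p = 0; p < NPROC; p++)
                    wait(NULL);
            return (0);
    }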
-- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Mon Jul 9 22:48:35 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A253D1065670 for ; Mon, 9 Jul 2012 22:48:35 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from plane.gmane.org (plane.gmane.org [80.91.229.3]) by mx1.freebsd.org (Postfix) with ESMTP id 5B11F8FC1E for ; Mon, 9 Jul 2012 22:48:35 +0000 (UTC) Received: from list by plane.gmane.org with local (Exim 4.69) (envelope-from ) id 1SoMkq-0006ck-6M for freebsd-fs@freebsd.org; Tue, 10 Jul 2012 00:48:32 +0200 Received: from dyn1247-77.vpn.ic.ac.uk ([129.31.247.77]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Tue, 10 Jul 2012 00:48:32 +0200 Received: from johannes by dyn1247-77.vpn.ic.ac.uk with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Tue, 10 Jul 2012 00:48:32 +0200 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Johannes Totz Date: Mon, 09 Jul 2012 23:48:20 +0100 Lines: 69 Message-ID: References: Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@dough.gmane.org X-Gmane-NNTP-Posting-Host: dyn1247-77.vpn.ic.ac.uk User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20120614 Thunderbird/13.0.1 In-Reply-To: Subject: Re: zfs send glitch X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 09 Jul 2012 22:48:35 -0000 On 09/07/2012 18:07, Steven Hartland wrote: > ----- Original Message ----- From: "Johannes Totz" > >> zfs send with verbose flag fails for some reason, whereas omitting the >> verbose flag works (beware of line breaks): >> >> # zfs send -vRI @120203-2320 backup/alexs-imac/120607-0056@120607-0056 | >> zfs receive -vun panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320 >> >> send from @120203-2320 to backup/alexs-imac/120607-0056@120603-2311 >> estimated size is 32.8G >> send from @120603-2311 to backup/alexs-imac/120607-0056@120607-0056 >> estimated size is 8.56G >> total estimated size is 41.3G >> cannot hold 'backup/alexs-imac/120607-0056@120203-2320': pool must be >> upgraded >> WARNING: could not send backup/alexs-imac/120607-0056@120607-0056: >> incremental source (backup/alexs-imac/120607-0056@120203-2320) does not >> exist >> >> And now without verbose flag: >> >> # zfs send -RI @120203-2320 backup/alexs-imac/120607-0056@120607-0056 | >> zfs receive -vu panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320 >> receiving incremental stream of >> backup/alexs-imac/120607-0056@120603-2311 into >> panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320@120603-2311 >> > > Are you sure its the verbose flag which breaks it or does it work on the > second run that works, as we've seen very strange behaviour with send > receive recently but I've not had time to sit down and confirm exactly > what's happening. 
Yeah, without the -v flag it finished successfully: # zfs send -RI @120203-2320 backup/alexs-imac/120607-0056@120607-0056 | zfs receive -vu panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320 receiving incremental stream of backup/alexs-imac/120607-0056@120603-2311 into panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320@120603-2311 received 33.1GB stream in 19253 seconds (1.76MB/sec) receiving incremental stream of backup/alexs-imac/120607-0056@120607-0056 into panzer/home/jo/backups/alexs-imac/alexs-imac/120203-2320@120607-0056 received 8.82GB stream in 8513 seconds (1.06MB/sec) > > Regards > Steve > > ================================================ > This e.mail is private and confidential between Multiplay (UK) Ltd. and > the person or entity to whom it is addressed. In the event of > misdirection, the recipient is prohibited from using, copying, printing > or otherwise disseminating it or any information contained in it. > In the event of misdirection, illegible or incomplete transmission > please telephone +44 845 868 1337 > or return the E.mail to postmaster@multiplay.co.uk. > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Jul 10 06:59:28 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 0E0D5106566B for ; Tue, 10 Jul 2012 06:59:28 +0000 (UTC) (envelope-from pho@holm.cc) Received: from relay01.pair.com (relay01.pair.com [209.68.5.15]) by mx1.freebsd.org (Postfix) with SMTP id 9C34A8FC18 for ; Tue, 10 Jul 2012 06:59:27 +0000 (UTC) Received: (qmail 93463 invoked from network); 10 Jul 2012 06:59:21 -0000 Received: from 87.58.146.107 (HELO x2.osted.lan) (87.58.146.107) by relay01.pair.com with SMTP; 10 Jul 2012 06:59:21 -0000 X-pair-Authenticated: 87.58.146.107 Received: from x2.osted.lan (localhost [127.0.0.1]) by x2.osted.lan (8.14.5/8.14.5) with ESMTP id q6A6xJsv007094; Tue, 10 Jul 2012 08:59:20 +0200 (CEST) (envelope-from pho@x2.osted.lan) Received: (from pho@localhost) by x2.osted.lan (8.14.5/8.14.5/Submit) id q6A6xJIk007093; Tue, 10 Jul 2012 08:59:19 +0200 (CEST) (envelope-from pho) Date: Tue, 10 Jul 2012 08:59:19 +0200 From: Peter Holm To: John Baldwin Message-ID: <20120710065919.GA7051@x2.osted.lan> References: <201203071318.08241.jhb@freebsd.org> <201207091138.15655.jhb@freebsd.org> <20120709204007.GW2338@deviant.kiev.zoral.com.ua> <201207091648.32306.jhb@freebsd.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <201207091648.32306.jhb@freebsd.org> User-Agent: Mutt/1.4.2.3i Cc: freebsd-fs@freebsd.org Subject: Re: close() of an flock'd file is not atomic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jul 2012 06:59:28 -0000 On Mon, Jul 09, 2012 at 04:48:32PM -0400, John Baldwin wrote: > On Monday, July 09, 2012 4:40:07 pm Konstantin Belousov wrote: > > On Mon, Jul 09, 2012 at 11:38:15AM -0400, John Baldwin wrote: > > > Here now is the tested version of the actual fix after the vn_open_vnode() > > > changes were committed. This is hopefully easier to parse now. 
> > > > > > http://www.FreeBSD.org/~jhb/patches/flock_open_close4.patch > > > > Do you need atomic op to set FHASLOCK in vn_open_cred ? I do not think > > *fp can be shared with other thread there. > > Oh, that's true. I had just preserved it from the original code. > > > I thought that vrele() call in vn_closefile() would need a > > vn_start_write() or vn_start_secondary_write() dance around it, but > > now I believe it is not needed, since ufs_inactive() handles start of > > secondary writes on its own. Still, it would be good if Peter could test > > the patch with snapshotting load just be to safe there. > > Ok. I'm happy to have pho@ test it, but the test will have to use file > locking along with snapshots to exercise this case. > I'll do that. Regards, - Peter From owner-freebsd-fs@FreeBSD.ORG Tue Jul 10 08:32:40 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BC64D1065675 for ; Tue, 10 Jul 2012 08:32:40 +0000 (UTC) (envelope-from gkontos.mail@gmail.com) Received: from mail-ob0-f182.google.com (mail-ob0-f182.google.com [209.85.214.182]) by mx1.freebsd.org (Postfix) with ESMTP id 7D3188FC0C for ; Tue, 10 Jul 2012 08:32:40 +0000 (UTC) Received: by obbun3 with SMTP id un3so2222160obb.13 for ; Tue, 10 Jul 2012 01:32:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=jjZdX3VUk5KYTFL6MRpVC4efZ0rrOqu1jN9Y7XYoQq4=; b=Y3eXONP0Q1dPiaYrPhjxd/RzIzEwtLEM1vT+3gjl+gAvK++TkF8iq5AG/ebOD35fe6 1vBzCetNwdnCuuKg5QPTrjmVVjidrvSsCsUGULtCutPXWhvmZ0EMwdXGMm6x1V5sQVbF Eh7TnTkP28DquoViyoX+loYUpkIdLaDSAldNZefoDmlaGPeecUh8Ha9KG55VFRx0kgT0 nFeEzHyCVpwuooFakurahc870Onw707LtndaD5LybpFfmCF85dT17eFZzPoDWqlrOsL9 yTdQPfPgS+ZORvuHgNzDoAjoM7ZOUYHrzQgvB1a6irRJtIvzqkRFqNuLmYIf1ZFdGqJA VIaQ== MIME-Version: 1.0 Received: by 10.60.30.132 with SMTP id s4mr359525oeh.6.1341909159879; Tue, 10 Jul 2012 01:32:39 -0700 (PDT) Received: by 10.182.209.33 with HTTP; Tue, 10 Jul 2012 01:32:39 -0700 (PDT) In-Reply-To: <1341864787.32803.43.camel@btw.pki2.com> References: <1341864787.32803.43.camel@btw.pki2.com> Date: Tue, 10 Jul 2012 11:32:39 +0300 Message-ID: From: George Kontostanos To: Dennis Glatting Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jul 2012 08:32:40 -0000 On Mon, Jul 9, 2012 at 11:13 PM, Dennis Glatting wrote: > I have a ZFS array of disks where the system simply stops as if forever > blocked by some IO mutex. 
This happens often and the following is the > output of top: > > last pid: 6075; load averages: 0.00, 0.00, 0.00 up 0+16:54:41 > 13:04:10 > 135 processes: 1 running, 134 sleeping > CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle > Mem: 47M Active, 24M Inact, 18G Wired, 120M Buf, 44G Free > Swap: 32G Total, 32G Free > > PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU > COMMAND > 2410 root 1 33 0 11992K 2820K zio->i 7 331:25 0.00% > bzip2 > 2621 root 1 52 4 28640K 5544K tx->tx 24 245:33 0.00% > john > 2624 root 1 48 4 28640K 5544K tx->tx 4 239:08 0.00% > john > 2623 root 1 49 4 28640K 5544K tx->tx 7 238:44 0.00% > john > 2640 root 1 42 4 28640K 5420K tx->tx 23 206:51 0.00% > john > 2638 root 1 42 4 28640K 5420K tx->tx 28 206:34 0.00% > john > 2639 root 1 42 4 28640K 5420K tx->tx 9 206:30 0.00% > john > 2637 root 1 42 4 28640K 5420K tx->tx 18 206:24 0.00% > john > > > This system is presently resilvering a disk but these stops have > happened before. > > > iirc# zpool status disk-1 > pool: disk-1 > state: DEGRADED > status: One or more devices is currently being resilvered. The pool > will > continue to function, possibly in a degraded state. > action: Wait for the resilver to complete. > scan: resilver in progress since Sun Jul 8 13:07:46 2012 > 104G scanned out of 12.4T at 1.73M/s, (scan is slow, no > estimated time) > 10.3G resilvered, 0.82% done > config: > > NAME STATE READ WRITE CKSUM > disk-1 DEGRADED 0 0 0 > raidz2-0 DEGRADED 0 0 0 > da1 ONLINE 0 0 0 > da2 ONLINE 0 0 0 > da10 ONLINE 0 0 0 > da9 ONLINE 0 0 0 > da5 ONLINE 0 0 0 > da6 ONLINE 0 0 0 > da7 ONLINE 0 0 0 > replacing-7 DEGRADED 0 0 0 > 17938531774236227186 UNAVAIL 0 0 0 was /dev/da8 > da3 ONLINE 0 0 0 (resilvering) > da8 ONLINE 0 0 0 > da4 ONLINE 0 0 0 > logs > ada2p1 ONLINE 0 0 0 > cache > ada1 ONLINE 0 0 0 > > errors: No known data errors > > > This system has dissimilar disks, which I understand should not be a > problem but the stopping also happened before I started the slow disk > upgrade process. > > The disks are served by: > > * A LSI 9211 flashed to IT, and > * A LSI 2008 controller on the motherboard also flashed to IT. > > The 2008 BIOS and firmware is the most recent from LSI. The motherboard > is a Supermicro H8DG6-F. > > > My question is what should I be looking at and how should I look at it? > There is nothing in the logs or the console, rather the system is > forever paused and entering commands results in no response (it's as if > everything is deadlocked). > > > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" Can you post your 'dmesg | grep mps', the FreeBSD version you run? Also, is there any chance that those disks are 4K? 
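[The 4K question can be answered from userland: diskinfo -v should print both sectorsize and stripesize on 9.x, and the same two values come from the disk(4) ioctls. A sketch under those assumptions -- the device path is an example, and note that many 4K drives advertise 512-byte logical sectors, so sectorsize alone can be misleading; stripesize is GEOM's view of the larger physical sector, and 0 just means GEOM learned nothing:]

    #include <sys/types.h>
    #include <sys/disk.h>
    #include <sys/ioctl.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char *argv[])
    {
            u_int secsize;
            off_t stripesize;
            int fd;

            if (argc != 2)
                    errx(1, "usage: sectorsize /dev/daN");
            if ((fd = open(argv[1], O_RDONLY)) == -1)
                    err(1, "%s", argv[1]);
            if (ioctl(fd, DIOCGSECTORSIZE, &secsize) == -1)
                    err(1, "DIOCGSECTORSIZE");
            if (ioctl(fd, DIOCGSTRIPESIZE, &stripesize) == -1)
                    err(1, "DIOCGSTRIPESIZE");
            /* sector 512 + stripe 4096 = a 4K disk behind 512e emulation */
            printf("logical sector: %u bytes, stripe: %jd bytes\n",
                secsize, (intmax_t)stripesize);
            close(fd);
            return (0);
    }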
-- George Kontostanos Aicom telecoms ltd http://www.aisecure.net From owner-freebsd-fs@FreeBSD.ORG Tue Jul 10 14:31:46 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A96B6106567C for ; Tue, 10 Jul 2012 14:31:46 +0000 (UTC) (envelope-from olivier@gid0.org) Received: from mail-ee0-f54.google.com (mail-ee0-f54.google.com [74.125.83.54]) by mx1.freebsd.org (Postfix) with ESMTP id 377E78FC17 for ; Tue, 10 Jul 2012 14:31:46 +0000 (UTC) Received: by eeke49 with SMTP id e49so20150eek.13 for ; Tue, 10 Jul 2012 07:31:45 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:x-gm-message-state; bh=pl7zkoLBnf2cVB5P1L2yN3f4R3khxdHZ80TR3otzfVA=; b=lN+oQ4SCpcMrZrH26VI8Whnr+LcPfnm03bRPuYQWMhNeY0sj20Y129cbmqNy906ncx 8n/OeemyKhwUstoS3/99UEt2qx7+btyZDlzQ4l3c3wN9JYe2WVGJJr8cbs/NJnz5eu8/ nKczCohIzInfSjozlquxYSgwNju+/rmDCLGVSoD56NPjj10hvZRD5jFA9Tein5sYiU8H IDfTz3dBiYUS6OsmfZ2BSmyv0gdFqBarg75jlDhM0q7T3G2q3nWZBh55y1XKTmMiU4+R KvhcEeJMepMx+MA2+n5L6cYx2NE5hA3OLnSonr4vvpTSGC5j0OKD6FXiS5V126n6rVG7 WhWA== MIME-Version: 1.0 Received: by 10.152.146.169 with SMTP id td9mr44663156lab.42.1341930704918; Tue, 10 Jul 2012 07:31:44 -0700 (PDT) Received: by 10.112.100.68 with HTTP; Tue, 10 Jul 2012 07:31:44 -0700 (PDT) In-Reply-To: <4FE83C6E.2030600@gmail.com> References: <20120621140149.GA59722@reks> <4FE83C6E.2030600@gmail.com> Date: Tue, 10 Jul 2012 16:31:44 +0200 Message-ID: From: Olivier Smedts To: Volodymyr Kostyrko Content-Type: text/plain; charset=ISO-8859-1 X-Gm-Message-State: ALoCoQnBNh7dVMnHkcjlyOGupfDe/gokW6tLRlpGzBjhA3jTIAiU2V5LVOHoZ5R1dWsXcE3b9N/r Cc: freebsd-fs@freebsd.org Subject: Re: [RFC] tmpfs RB-Tree for directory entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jul 2012 14:31:46 -0000 2012/6/25 Volodymyr Kostyrko : > I applied patch on my test machines on 9-STABLE (i386/amd64). Both use tmpfs > when building ports. No regression for three days (compiling > openoffice/chromium). Just a "me too". -- Olivier Smedts _ ASCII ribbon campaign ( ) e-mail: olivier@gid0.org - against HTML email & vCards X www: http://www.gid0.org - against proprietary attachments / \ "Il y a seulement 10 sortes de gens dans le monde : ceux qui comprennent le binaire, et ceux qui ne le comprennent pas." 
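[For readers who have not seen the patch being tested above: tmpfs has kept directory entries on a linear list, so name lookup in large directories degrades linearly, and keying the entries in an RB-tree gives O(log n) lookups instead. A toy userland illustration using the same <sys/tree.h> macros the kernel uses -- toy_dirent and friends are invented names, not the identifiers in the actual patch:]

    #include <sys/tree.h>
    #include <stdio.h>
    #include <string.h>

    struct toy_dirent {
            RB_ENTRY(toy_dirent) td_link;
            const char *td_name;
    };

    static int
    toy_cmp(struct toy_dirent *a, struct toy_dirent *b)
    {
            return (strcmp(a->td_name, b->td_name));
    }

    RB_HEAD(toy_dir, toy_dirent);
    RB_GENERATE_STATIC(toy_dir, toy_dirent, td_link, toy_cmp)

    int
    main(void)
    {
            struct toy_dir dir = RB_INITIALIZER(&dir);
            struct toy_dirent entries[3], key, *found;
            const char *names[3] = { "Makefile", "README", "src" };
            int i;

            for (i = 0; i < 3; i++) {
                    entries[i].td_name = names[i];
                    RB_INSERT(toy_dir, &dir, &entries[i]);
            }

            key.td_name = "README";
            found = RB_FIND(toy_dir, &dir, &key);   /* O(log n), not a list walk */
            printf("found: %s\n", found != NULL ? found->td_name : "(none)");
            return (0);
    }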
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 10 18:08:20 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DD7601065670 for ; Tue, 10 Jul 2012 18:08:20 +0000 (UTC) (envelope-from dg@pki2.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id A29F18FC1C for ; Tue, 10 Jul 2012 18:08:20 +0000 (UTC) Received: from btw.pki2.com (btw.pki2.com [192.168.23.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q6AI8Fi7074632 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT); Tue, 10 Jul 2012 11:08:16 -0700 (PDT) (envelope-from dg@pki2.com) Date: Tue, 10 Jul 2012 11:08:15 -0700 (PDT) From: Dennis Glatting X-X-Sender: dennisg@btw.pki2.com To: George Kontostanos In-Reply-To: Message-ID: References: <1341864787.32803.43.camel@btw.pki2.com> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q6AI8Fi7074632 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: dg@pki2.com Cc: freebsd-fs@freebsd.org Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jul 2012 18:08:21 -0000 On Tue, 10 Jul 2012, George Kontostanos wrote: > On Mon, Jul 9, 2012 at 11:13 PM, Dennis Glatting wrote: >> I have a ZFS array of disks where the system simply stops as if forever >> blocked by some IO mutex. This happens often and the following is the >> output of top: >> >> last pid: 6075; load averages: 0.00, 0.00, 0.00 up 0+16:54:41 >> 13:04:10 >> 135 processes: 1 running, 134 sleeping >> CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle >> Mem: 47M Active, 24M Inact, 18G Wired, 120M Buf, 44G Free >> Swap: 32G Total, 32G Free >> >> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU >> COMMAND >> 2410 root 1 33 0 11992K 2820K zio->i 7 331:25 0.00% >> bzip2 >> 2621 root 1 52 4 28640K 5544K tx->tx 24 245:33 0.00% >> john >> 2624 root 1 48 4 28640K 5544K tx->tx 4 239:08 0.00% >> john >> 2623 root 1 49 4 28640K 5544K tx->tx 7 238:44 0.00% >> john >> 2640 root 1 42 4 28640K 5420K tx->tx 23 206:51 0.00% >> john >> 2638 root 1 42 4 28640K 5420K tx->tx 28 206:34 0.00% >> john >> 2639 root 1 42 4 28640K 5420K tx->tx 9 206:30 0.00% >> john >> 2637 root 1 42 4 28640K 5420K tx->tx 18 206:24 0.00% >> john >> >> >> This system is presently resilvering a disk but these stops have >> happened before. >> >> >> iirc# zpool status disk-1 >> pool: disk-1 >> state: DEGRADED >> status: One or more devices is currently being resilvered. The pool >> will >> continue to function, possibly in a degraded state. >> action: Wait for the resilver to complete. 
>> scan: resilver in progress since Sun Jul 8 13:07:46 2012
>> 104G scanned out of 12.4T at 1.73M/s, (scan is slow, no estimated time)
>> 10.3G resilvered, 0.82% done
>> config:
>>
>>         NAME                        STATE     READ WRITE CKSUM
>>         disk-1                      DEGRADED     0     0     0
>>           raidz2-0                  DEGRADED     0     0     0
>>             da1                     ONLINE       0     0     0
>>             da2                     ONLINE       0     0     0
>>             da10                    ONLINE       0     0     0
>>             da9                     ONLINE       0     0     0
>>             da5                     ONLINE       0     0     0
>>             da6                     ONLINE       0     0     0
>>             da7                     ONLINE       0     0     0
>>             replacing-7             DEGRADED     0     0     0
>>               17938531774236227186  UNAVAIL      0     0     0  was /dev/da8
>>               da3                   ONLINE       0     0     0  (resilvering)
>>             da8                     ONLINE       0     0     0
>>             da4                     ONLINE       0     0     0
>>         logs
>>           ada2p1                    ONLINE       0     0     0
>>         cache
>>           ada1                      ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> This system has dissimilar disks, which I understand should not be a
>> problem but the stopping also happened before I started the slow disk
>> upgrade process.
>>
>> The disks are served by:
>>
>> * A LSI 9211 flashed to IT, and
>> * A LSI 2008 controller on the motherboard also flashed to IT.
>>
>> The 2008 BIOS and firmware is the most recent from LSI. The motherboard
>> is a Supermicro H8DG6-F.
>>
>> My question is what should I be looking at and how should I look at it?
>> There is nothing in the logs or the console, rather the system is
>> forever paused and entering commands results in no response (it's as if
>> everything is deadlocked).
>>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
> Can you post your 'dmesg | grep mps', the FreeBSD version you run?
> Also, is there any chance that those disks are 4K?
>

I sent that in another post but included it below. Yes, the disks are a mix. I'm presently migrating 2TB crappy disks, and some 2TB not-so-crappy disks, to 3TB crappy-unknown disks. However:

1) Why would a mix of 512/4k disks in a ZFS volume lock out a hardware RAID1 volume on another controller?

2) Is there a known problem, other than performance, mixing 512/4k?

3) Related: How does an SSD array of block size foo impact an array of sector size bar?

Thanks.
iirc> dmesg | grep mps mps0: port 0xd000-0xd0ff mem 0xdfe3c000-0xdfe3ffff,0xdfe40000-0xdfe7ffff irq 19 at device 0.0 on pci4 mps0: Firmware: 13.00.57.00, Driver: 14.00.00.01-fbsd mps0: IOCCapabilities: 1285c mps0: attempting to allocate 1 MSI-X vectors (15 supported) mps0: using IRQ 256 for MSI-X mps1: port 0xc000-0xc0ff mem 0xdfd3c000-0xdfd3ffff,0xdfd40000-0xdfd7ffff irq 16 at device 0.0 on pci3 mps1: Firmware: 13.00.57.00, Driver: 14.00.00.01-fbsd mps1: IOCCapabilities: 1285c mps1: attempting to allocate 1 MSI-X vectors (15 supported) mps1: using IRQ 257 for MSI-X da1 at mps0 bus 0 scbus1 target 0 lun 0 da5 at mps1 bus 0 scbus2 target 1 lun 0 da4 at mps0 bus 0 scbus1 target 6 lun 0 da2 at mps0 bus 0 scbus1 target 1 lun 0 da6 at mps1 bus 0 scbus2 target 2 lun 0 da8 at mps1 bus 0 scbus2 target 5 lun 0 da7 at mps1 bus 0 scbus2 target 3 lun 0 da10 at mps1 bus 0 scbus2 target 8 lun 0 pass2 at mps0 bus 0 scbus1 target 0 lun 0 pass3 at mps0 bus 0 scbus1 target 1 lun 0 pass4 at mps0 bus 0 scbus1 target 5 lun 0 pass5 at mps0 bus 0 scbus1 target 6 lun 0 pass6 at mps1 bus 0 scbus2 target 1 lun 0 pass7 at mps1 bus 0 scbus2 target 2 lun 0 pass8 at mps1 bus 0 scbus2 target 3 lun 0 pass9 at mps1 bus 0 scbus2 target 5 lun 0 pass10 at mps1 bus 0 scbus2 target 7 lun 0 pass11 at mps1 bus 0 scbus2 target 8 lun 0 da3 at mps0 bus 0 scbus1 target 5 lun 0 da9 at mps1 bus 0 scbus2 target 7 lun 0 iirc> uname -a FreeBSD iirc 9.0-STABLE FreeBSD 9.0-STABLE #14: Sun Jul 8 16:54:00 PDT 2012 root@iirc:/sys/amd64/compile/SMUNI amd64 > -- > George Kontostanos > Aicom telecoms ltd > http://www.aisecure.net > From owner-freebsd-fs@FreeBSD.ORG Tue Jul 10 18:57:38 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BBB12106564A for ; Tue, 10 Jul 2012 18:57:38 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm15-vm6.bullet.mail.ne1.yahoo.com (nm15-vm6.bullet.mail.ne1.yahoo.com [98.138.91.108]) by mx1.freebsd.org (Postfix) with SMTP id 527448FC12 for ; Tue, 10 Jul 2012 18:57:38 +0000 (UTC) Received: from [98.138.90.51] by nm15.bullet.mail.ne1.yahoo.com with NNFMP; 10 Jul 2012 18:57:37 -0000 Received: from [98.138.89.248] by tm4.bullet.mail.ne1.yahoo.com with NNFMP; 10 Jul 2012 18:57:37 -0000 Received: from [127.0.0.1] by omp1040.mail.ne1.yahoo.com with NNFMP; 10 Jul 2012 18:57:37 -0000 X-Yahoo-Newman-Property: ymail-5 X-Yahoo-Newman-Id: 561835.97288.bm@omp1040.mail.ne1.yahoo.com Received: (qmail 66562 invoked by uid 60001); 10 Jul 2012 18:57:37 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1341946657; bh=kqUJLmv1ubi23eNij34HyHuxtIJmJdykilJezcvlrDw=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=JuZN5DjjOsJSZ7yRiNwOaDvXaMhlP7Wys02QQBxBpLCkDHnqQHFJ7KkmTuSS4Qs49mG831wunpR84UDZQmwfCg7NMub0FL9NlvMECqZp46ZfpHI+Yys+QnfnUe5V3sk02bP4AQB/a1hWeMDrDRuX5TKb+4frodfhNVZN9YtIxlI= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=0pGDsiRVDpfzx6Q31fnsNdOu4MVHPOaR/P0TeauIc4IJzFlCRr4izInORWi6tU7AZSCUu3qftNk5zH1xxw/+uHFdekBXyUpVSBNv2BIvmvqD9b5GrlwIp6WwWT9NSI9bg3krSM64h8vzD0efuvPsY6zBy4anEuBoc+TmgDiGE3s=; X-YMail-OSG: uJ9owasVM1nMMBdYPPcpa__jIrB5Ys53Phm_N9YHIBA1fPP th7KtXtigRiz_Ix8c1v7yGvbigwuYyzu3CtCRhQz_MVTgn193Oj9f3X0fCup qV.YJ0JR8YIQX5v2pFVZ1N41ltAiwtJcmj.D5UXY8us0Ig7EaZdILFbJHm6S 
ivFndhlmwcoLmaZu5y2NNxlFPQ0b7gHSxQeyKWLLZ8xgfB7aVksvslRMPUbI KYKVrMO1hL_XW6G2etfa_O.ZxT9mzMajKfBsaPy92GomaHsRUy4IYrWWoRrX UEXmz1e29pLKvtT4zARVZ5PiVDW9i9NZqbXvcksLRDS0hgSosD2WJMRMuv9N TcglK5qGfDytAw5T6XjjYZVpFYQLJ5s0yEzSWw84r9rAOLp4WFHtMyMPEOLz Y Received: from [32.178.134.54] by web122505.mail.ne1.yahoo.com via HTTP; Tue, 10 Jul 2012 11:57:37 PDT X-Mailer: YahooMailClassic/15.0.8 YahooMailWebService/0.8.120.356233 Message-ID: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> Date: Tue, 10 Jul 2012 11:57:37 -0700 (PDT) From: Jason Usher To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Subject: chaining JBOD chassic to server ... why am I scared ? (ZFS) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jul 2012 18:57:38 -0000 The de-facto configuration the smart folks are using for ZFS seems to be: - 16/24/36 drive supermicro chassis - LSI 9211-8i internal cards - ZFS and probably raidz2 or raidz3 vdevs Ok, fine. But then I see some even smarter folks attaching the 48-drive 4U JBOD chassis to this configuration, probably using a different LSI card that has an external SAS cable. So ... 84 drives accessible to ZFS on one system. In terms of space and money efficiency, it sounds really great - fewer systems to manage, etc. But this scares me ... - two different power sources - so the "head unit" can lose power independent of the JBOD device ... how well does that turn out ? - external cabling - has anyone just yanked that external SAS cable a few times, and what does that look like ? - If you have a single SLOG, or a single L2ARC device, where do you put it ? And then what happens if "the other half" of the system detaches from the half that the SLOG/L2ARC is in ? - ... any number of other weird things ? Just how well does ZFS v28 deal with these kind of situations, and do I have a good reason to be awfully shy about doing this ? 
From owner-freebsd-fs@FreeBSD.ORG Tue Jul 10 19:31:10 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E1E731065670 for ; Tue, 10 Jul 2012 19:31:10 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-qc0-f182.google.com (mail-qc0-f182.google.com [209.85.216.182]) by mx1.freebsd.org (Postfix) with ESMTP id 960578FC0C for ; Tue, 10 Jul 2012 19:31:10 +0000 (UTC) Received: by qcsg15 with SMTP id g15so359030qcs.13 for ; Tue, 10 Jul 2012 12:31:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=efy/IfOav50Jvtke62aNcuLidAPjssQ/Hc4KQFLwCXk=; b=mw+y6QAl4NcCHSnw1M8qdMx+75R4l+tBLMAsQnUZZrJWHDkEC69Dk7VzT1xqisieFy sYhVMgIjJV49FbbIM8N5QPya8d7qyTTpdVA4eFp1qvetC3QwcfRyg5rWIY5gsD1MN+z0 sbPZIi3agCroPd/6F2eYeDrD1kemF0yXHv/W7OWrWF5lBZ8UwV39H5FFfT1mW38bD6oi aucTWMQ8Uh09A+bhM/DZTmE5ZfthOTJbf1d6c5L2hQoEkEtcok+mIJN8Qf5l6Ja8AOz3 JwQJAYSrMHSUc4Tem65aUGk6gwwaT9W661HqPXo4owvD2V/glYLG6NDutjg/r94V9u8w pXkQ== MIME-Version: 1.0 Received: by 10.224.117.13 with SMTP id o13mr82176221qaq.73.1341948669858; Tue, 10 Jul 2012 12:31:09 -0700 (PDT) Sender: rincebrain@gmail.com Received: by 10.229.40.4 with HTTP; Tue, 10 Jul 2012 12:31:09 -0700 (PDT) In-Reply-To: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> References: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> Date: Tue, 10 Jul 2012 15:31:09 -0400 X-Google-Sender-Auth: QRopeJgATBLnsXHZB97Tw4nmQq4 Message-ID: From: Rich To: Jason Usher Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org Subject: Re: chaining JBOD chassic to server ... why am I scared ? (ZFS) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jul 2012 19:31:11 -0000 There's not really a visible difference between either the head node or JBOD(s) losing power and e.g. a backplane failure - power lost, and then whatever happens depends on your disk configuration and distribution. The Supermicro chassis in question have the option to take 2.5 internal drives if desired, which is where I'd suggest root+(SLOG,L2ARC) [though the 36d ones don't, IIRC]. You may also get more mileage out of the 9207-8[ie] - not much cost difference, PCIe gen 3 and newer chip. [Can't speak to how it performs in practice other than that reviews seem to be positive; mine haven't arrived yet.] - Rich On Tue, Jul 10, 2012 at 2:57 PM, Jason Usher wrote: > The de-facto configuration the smart folks are using for ZFS seems to be: > > - 16/24/36 drive supermicro chassis > - LSI 9211-8i internal cards > - ZFS and probably raidz2 or raidz3 vdevs > > Ok, fine. But then I see some even smarter folks attaching the 48-drive 4U JBOD chassis to this configuration, probably using a different LSI card that has an external SAS cable. > > So ... 84 drives accessible to ZFS on one system. In terms of space and money efficiency, it sounds really great - fewer systems to manage, etc. > > But this scares me ... > > - two different power sources - so the "head unit" can lose power independent of the JBOD device ... how well does that turn out ? > > - external cabling - has anyone just yanked that external SAS cable a few times, and what does that look like ? 
> > - If you have a single SLOG, or a single L2ARC device, where do you put it ? And then what happens if "the other half" of the system detaches from the half that the SLOG/L2ARC is in ? > > - ... any number of other weird things ? > > > Just how well does ZFS v28 deal with these kind of situations, and do I have a good reason to be awfully shy about doing this ? > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Tue Jul 10 19:48:58 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 122831065687 for ; Tue, 10 Jul 2012 19:48:58 +0000 (UTC) (envelope-from toasty@dragondata.com) Received: from mail-yx0-f182.google.com (mail-yx0-f182.google.com [209.85.213.182]) by mx1.freebsd.org (Postfix) with ESMTP id B2C688FC0A for ; Tue, 10 Jul 2012 19:48:57 +0000 (UTC) Received: by yenl8 with SMTP id l8so480582yen.13 for ; Tue, 10 Jul 2012 12:48:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=dragondata.com; s=google; h=content-type:mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; bh=6izRNJUnPs6Zohq7vCCXihk8tcKll9x5+AT5oVzjbqE=; b=n3OZbE4AWR9MlmcDxdBBr7NPB3EkxkaTrM/nU4OTl76J5qv3BWPqNXOhOBVZ3f4g5L 6BviC4lBiOw+kqxNFGRZhakoHOhVbqeaO9EXcJs5o1QrgXMlXJGT1g7yfgh2zIgdrJEA vgzhx8+ONN86IoAHJdFgWOKT+B5ybc5LZVaSk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=content-type:mime-version:subject:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer :x-gm-message-state; bh=6izRNJUnPs6Zohq7vCCXihk8tcKll9x5+AT5oVzjbqE=; b=TeNFDeYjP8B9T7v26nP7GmxDP4BwM/EI9qi6dhTLGqsQXjglM07le+Bggewol7hvkW lmX/jPjOM52PMZqP4hFlqfWKLNNN1Mu4zK/yh1POu1r/3WLA3ZkrZeIS1EIszHLA+Lcy IGu6taMjoCxKIoWO4Za151W9pczvbhBckPRwwMfLtlj4P2PxDbuuo9pvs4MHG9WTSFjL WReeAIN3oy+Ccv3Ux8sNm/yfrbKSjYqgbH0/qENu+T9tj/MPjbTaSvCwRSRIRRJ+8YGx TVQiqJBr+hzI2cl78XSDokRw30tfslvS4/8wffLsoupixw2aZvEamo97WEHa3RjuZ0GS Lp2g== Received: by 10.43.69.12 with SMTP id ya12mr23712317icb.50.1341949736924; Tue, 10 Jul 2012 12:48:56 -0700 (PDT) Received: from static177.us.your.org (static177.us.your.org. [204.9.55.177]) by mx.google.com with ESMTPS id ay5sm12424400igb.15.2012.07.10.12.48.52 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 10 Jul 2012 12:48:55 -0700 (PDT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 6.0 \(1485\)) From: Kevin Day In-Reply-To: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> Date: Tue, 10 Jul 2012 14:48:51 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: References: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> To: Jason Usher X-Mailer: Apple Mail (2.1485) X-Gm-Message-State: ALoCoQmzJHb6qbrfw141ZIs6Kwl4UJL8XGKJahn7pued+3U5U7Htan924Nls0A6dFWw+jaL1mlyj Cc: freebsd-fs@freebsd.org Subject: Re: chaining JBOD chassic to server ... why am I scared ? 
(ZFS) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 10 Jul 2012 19:48:58 -0000

On Jul 10, 2012, at 1:57 PM, Jason Usher wrote:

> The de-facto configuration the smart folks are using for ZFS seems to be:
>
> - 16/24/36 drive supermicro chassis
> - LSI 9211-8i internal cards
> - ZFS and probably raidz2 or raidz3 vdevs
>
> Ok, fine. But then I see some even smarter folks attaching the 48-drive 4U JBOD chassis to this configuration, probably using a different LSI card that has an external SAS cable.
>
> So ... 84 drives accessible to ZFS on one system. In terms of space and money efficiency, it sounds really great - fewer systems to manage, etc.
>
> But this scares me ...
>
> - two different power sources - so the "head unit" can lose power independent of the JBOD device ... how well does that turn out ?
>
> - external cabling - has anyone just yanked that external SAS cable a few times, and what does that look like ?
>
> - If you have a single SLOG, or a single L2ARC device, where do you put it ? And then what happens if "the other half" of the system detaches from the half that the SLOG/L2ARC is in ?
>
> - ... any number of other weird things ?
>
> Just how well does ZFS v28 deal with these kind of situations, and do I have a good reason to be awfully shy about doing this ?

We do this for ftpmirror.your.org (which is ftp3.us.freebsd.org & others). It's got an LSI 9280 in it, which has 3 external chassis (each with 24 3TB drives) attached to it. Before putting into use, we experimented with pulling the power/data cables from random places while using it. Nothing we did was any worse than the whole system just losing power. The only difference was that in some cases losing all the storage would hang the server until it was power cycled, but again... no worse than if everything lost power. If something goes bad, it's pretty likely things are going to go down, no matter the physical topology. There was no crazy data loss or anything if that's what you're worried about.
-- Kevin From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 00:16:06 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 807E8106566C for ; Wed, 11 Jul 2012 00:16:06 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 3DEA78FC12 for ; Wed, 11 Jul 2012 00:16:05 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id q6B08dFe027114; Tue, 10 Jul 2012 19:08:39 -0500 (CDT) Date: Tue, 10 Jul 2012 19:08:39 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Jason Usher In-Reply-To: <1341863894.36655.YahooMailClassic@web122501.mail.ne1.yahoo.com> Message-ID: References: <1341863894.36655.YahooMailClassic@web122501.mail.ne1.yahoo.com> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Tue, 10 Jul 2012 19:08:39 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: vdev/pool math with combined raidzX vdevs... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 00:16:06 -0000 On Mon, 9 Jul 2012, Jason Usher wrote: > > Am I really the only person worrying about the interactive failure properties of combining vdevs into a pool ? Yes. You are the only one. The strength of the individual vdev is the primary determining factor of the strength of the pool. However, one must not discount the "forklift" factor and the "fat finger" factor which quickly become more significant factors to the health of your pool. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 01:19:08 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 490EB1065674 for ; Wed, 11 Jul 2012 01:19:08 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 0A9608FC12 for ; Wed, 11 Jul 2012 01:19:07 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id q6B1J6DC027267; Tue, 10 Jul 2012 20:19:06 -0500 (CDT) Date: Tue, 10 Jul 2012 20:19:05 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Jason Usher In-Reply-To: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> Message-ID: References: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Tue, 10 Jul 2012 20:19:06 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: chaining JBOD chassic to server ... why am I scared ? 
(ZFS) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 01:19:08 -0000 On Tue, 10 Jul 2012, Jason Usher wrote: > > But this scares me ... > > - two different power sources - so the "head unit" can lose power > independent of the JBOD device ... how well does that turn out ? Most of your concerns are things which have been normal for fiber channel based arrays for quite a few years already. SAS cables are shorter so there is a better chance that everything is on the same power and in the same rack. >From my limited experience, getting everything on the same power helps with managing things. > Just how well does ZFS v28 deal with these kind of situations, and > do I have a good reason to be awfully shy about doing this ? I have been on zfs mailing lists for many years and few of the issues reported have been due to ZFS. Usually problems are due to failing memory, bad cables, and SATA drives on SAS expanders. SAS disks work better in large arrays. There have been people running several hundred disks without problem. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 01:51:59 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 98DD31065680 for ; Wed, 11 Jul 2012 01:51:59 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-pb0-f54.google.com (mail-pb0-f54.google.com [209.85.160.54]) by mx1.freebsd.org (Postfix) with ESMTP id 6B3658FC0C for ; Wed, 11 Jul 2012 01:51:59 +0000 (UTC) Received: by pbbro2 with SMTP id ro2so1343300pbb.13 for ; Tue, 10 Jul 2012 18:51:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=lqhDrKe8FtPbPHUjW+ajtB+c2EJQAw23zYm6IJZG2eY=; b=CnNw+l9Y6U5RUBNJ1pUAJvVHgPmVe35pK0859p5MGBIgpCwpc3GgeciwP04KZSa+JD ktwVvD9RuoC0HGS4k1lsPGXONJhIK12EvHgn/dEdMaj4d2xVkJ2W7s5g6Nr1q1Lqm5Vp OX425U5HKP3M2A+5tP6sIJEltgINaJSKrBdinJh7Gh9+WVD8H4Z1mk8Rv/+8byPyGJ6B QO81EU01ed+AJvyH2Gh/8nVyLldyBThtZ2yfZsdv5TpeuKIWO+07LnNt6BIuQYjNPdKl RMNkvdyy0/tZtoOOsl4+Y/9/Z4ElWm4VYIWY6wIVAgVVTQl7MqDieH6b3jZZVdFZi5/l BzVA== MIME-Version: 1.0 Received: by 10.68.201.7 with SMTP id jw7mr19508197pbc.60.1341971519213; Tue, 10 Jul 2012 18:51:59 -0700 (PDT) Sender: rincebrain@gmail.com Received: by 10.68.38.10 with HTTP; Tue, 10 Jul 2012 18:51:59 -0700 (PDT) In-Reply-To: References: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> Date: Tue, 10 Jul 2012 21:51:59 -0400 X-Google-Sender-Auth: 2R6Y52RMBN4xCTBmofn5Qt2X30g Message-ID: From: Rich To: Bob Friesenhahn Content-Type: text/plain; charset=UTF-8 Cc: Jason Usher , freebsd-fs@freebsd.org Subject: Re: chaining JBOD chassic to server ... why am I scared ? (ZFS) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 01:51:59 -0000 *waves with his 360 disks attached to single head node* c.f. 
http://www.senecadata.com/products/vendor-partners/LSI/PDFs/Seneca-PSC-LSI.pdf (which is not me, but is relevant) - Rich On Tue, Jul 10, 2012 at 9:19 PM, Bob Friesenhahn wrote: > On Tue, 10 Jul 2012, Jason Usher wrote: >> >> >> But this scares me ... >> >> - two different power sources - so the "head unit" can lose power >> independent of the JBOD device ... how well does that turn out ? > > > Most of your concerns are things which have been normal for fiber channel > based arrays for quite a few years already. SAS cables are shorter so there > is a better chance that everything is on the same power and in the same > rack. > >> From my limited experience, getting everything on the same power helps > > with managing things. > > >> Just how well does ZFS v28 deal with these kind of situations, and do I >> have a good reason to be awfully shy about doing this ? > > > I have been on zfs mailing lists for many years and few of the issues > reported have been due to ZFS. Usually problems are due to failing memory, > bad cables, and SATA drives on SAS expanders. SAS disks work better in > large arrays. > > There have been people running several hundred disks without problem. > > Bob > -- > Bob Friesenhahn > bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ > GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 07:49:41 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8D3471065672 for ; Wed, 11 Jul 2012 07:49:41 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm17-vm3.bullet.mail.ne1.yahoo.com (nm17-vm3.bullet.mail.ne1.yahoo.com [98.138.91.147]) by mx1.freebsd.org (Postfix) with SMTP id 1A9AE8FC0C for ; Wed, 11 Jul 2012 07:49:41 +0000 (UTC) Received: from [98.138.90.51] by nm17.bullet.mail.ne1.yahoo.com with NNFMP; 11 Jul 2012 07:49:34 -0000 Received: from [98.138.89.173] by tm4.bullet.mail.ne1.yahoo.com with NNFMP; 11 Jul 2012 07:49:34 -0000 Received: from [127.0.0.1] by omp1029.mail.ne1.yahoo.com with NNFMP; 11 Jul 2012 07:49:34 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 321567.77684.bm@omp1029.mail.ne1.yahoo.com Received: (qmail 7236 invoked by uid 60001); 11 Jul 2012 07:49:34 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1341992974; bh=v3IIEPkErMmRtsT+wsLsL9K2KECtuBczgjmeKZbHQtY=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=m53/HSe+RvYXv+1E/oJNmZim31sN0GTGlFgbalVDacuLG+LjS5scZhEpEIgtB4w7lnxsw/X/v9nfThxr2CvuY7rnQYFFSAQZqivGqEsGSr/WY/+dvws1PQYGh7TvKHKgmRCwtSdzA+qG7+CusBKvGMdGWDi//iaql/xjfMgNd7k= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=XtyQTY3o1ZpE0EPba5QWGz0bKUTX4q0n00f2aAGYkf5bqpr3z9Y11aBAk7JCAKoKInXJk69qxn9VrPdgBnX428xIszlnCr0oXB56vvOfhJhsTDAGFzFmMjTl9hKBvEnoHQFmaS1HL5jJTbMKoq9Ymf9os3rqj5VfQ/NqIe+qG9c=; X-YMail-OSG: nHxfvCEVM1lfiEDPIwO2aNhLNaepZwAbwZK0TYmeYCP4gO4 WHUZ3qPSf5tva26OUf.HwKWCtCPkNZQVuKka4KmP1Q15Nu6qNvuZDUyKCXae 
0Xj_KKFEvgZqxChmWBEhA2NvoJyAYZXIZycQT0_PKm4pCd.rbyoqrsN4R2nl lojdKZ4K.34uOR39M.UQhnushDrJMHvZU.ycG.gJKhiNYewENAg5J97WPAIQ tLWCglfEmYr_ogNyFh1ACDh8WFQED5DI4riqyofE8En3b1oHFsdR_8DxZfue Yom7cDhjxpbbH_9Ylnko_JmyAzo5eLjr.PuXEZpmETgaih8OzJcx.sj7juoB E_lK69AwoMF1.8LrpHs4gqga6giMV_ws4aUMks7pUdmcAq4hUSxPRLXrX24o WmnL3vecJNeyyrbUcnBQv.g-- Received: from [12.202.173.2] by web122503.mail.ne1.yahoo.com via HTTP; Wed, 11 Jul 2012 00:49:34 PDT X-Mailer: YahooMailClassic/15.0.8 YahooMailWebService/0.8.120.356233 Message-ID: <1341992974.53118.YahooMailClassic@web122503.mail.ne1.yahoo.com> Date: Wed, 11 Jul 2012 00:49:34 -0700 (PDT) From: Jason Usher To: Bob Friesenhahn In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: vdev/pool math with combined raidzX vdevs... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 07:49:41 -0000

Hello Bob,

--- On Tue, 7/10/12, Bob Friesenhahn wrote:

> > Am I really the only person worrying about the
> interactive failure properties of combining vdevs into a
> pool ?
>
> Yes. You are the only one. The strength of the
> individual vdev is the primary determining factor of the
> strength of the pool.

Thanks for responding. So I must be mistaken, and the failure probability of each vdev is not additive ? As I mentioned earlier in the thread, I am not a probability person, nor would I trust my own calculations if I tried.

Because if it is additive, combining vdevs erases about half of the difference between raidz2 and raidz3, which I think is fairly significant.

Can we at least agree that it's not the same as a lone vdev ? If I can destroy a vdev by blowing 4 disks in it, OR by blowing 4 disks in some other vdev, that's a higher risk than if it were alone...

From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 08:36:07 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id ED757106567E for ; Wed, 11 Jul 2012 08:36:07 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id 500378FC20 for ; Wed, 11 Jul 2012 08:36:07 +0000 (UTC) Received: from dcave.digsys.bg (dcave.digsys.bg [192.92.129.5]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.5/8.14.5) with ESMTP id q6B8Zqjb097932 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Wed, 11 Jul 2012 11:35:56 +0300 (EEST) (envelope-from daniel@digsys.bg) Message-ID: <4FFD3AE8.8050608@digsys.bg> Date: Wed, 11 Jul 2012 11:35:52 +0300 From: Daniel Kalchev User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:10.0.5) Gecko/20120607 Thunderbird/10.0.5 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com> In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Subject: Re: chaining JBOD chassic to server ... why am I scared ? 
(ZFS) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 08:36:08 -0000 On 11.07.12 04:51, Rich wrote: > *waves with his 360 disks attached to single head node* Perhaps share some hints on sizing the system? Memory, CPU.. Daniel From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 13:49:08 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 7ADA71065674 for ; Wed, 11 Jul 2012 13:49:08 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id C7F5E8FC12 for ; Wed, 11 Jul 2012 13:49:07 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id q6BDn4or029874; Wed, 11 Jul 2012 08:49:04 -0500 (CDT) Date: Wed, 11 Jul 2012 08:49:04 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Jason Usher In-Reply-To: <1341992974.53118.YahooMailClassic@web122503.mail.ne1.yahoo.com> Message-ID: References: <1341992974.53118.YahooMailClassic@web122503.mail.ne1.yahoo.com> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Wed, 11 Jul 2012 08:49:04 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: vdev/pool math with combined raidzX vdevs... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 13:49:08 -0000 On Wed, 11 Jul 2012, Jason Usher wrote: > > Thanks for responding. So I must be mistaken, and the failure > probability of each vdev is not additive ? As I mentioned earlier > in the thread, I am not a probability person, nor would I trust my > own calculations if I tried. The probabilty is indeed additive just as you say. My point is that the fundamental integrity is offered at the vdev level. If a vdev fails, then the whole pool is gone. The MTTDL calculations for various vdev topologies vary by orders of magnitude, which tends to make the additive nature of more vdevs insignificant. 
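[Both halves of that claim are easy to check numerically: with n vdevs the pool survives only if every vdev survives, so P(loss) = 1 - (1 - p)^n, which is approximately n*p for small p (i.e. additive), while moving a vdev from raidz2 to raidz3 shrinks p itself by more than an order of magnitude. A quick sketch -- the p values below are the illustrative raidz1/2/3 figures quoted later in this thread, not measurements:]

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
            const char *level[3] = { "raidz1", "raidz2", "raidz3" };
            /* per-vdev loss probabilities as quoted in the thread */
            const double p[3] = { 0.149, 0.013, 0.00086 };
            int i, n;

            for (i = 0; i < 3; i++)
                    for (n = 1; n <= 3; n += 2)     /* 1 vdev vs. 3 vdevs */
                            printf("%s x%d vdev(s): P(pool loss) = %.4f%%\n",
                                level[i], n,
                                100.0 * (1.0 - pow(1.0 - p[i], n)));
            return (0);
    }

[Build with cc -lm. Three raidz3 vdevs come out at about 0.258%, matching the figure below, yet still a factor of five below one lone raidz2 vdev.]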
Here are some useful blog articles about MTTDL: https://blogs.oracle.com/relling/entry/raid_recommendations_space_vs_mttdl http://blog.richardelling.com/2010/02/zfs-data-protection-comparison.html http://www.servethehome.com/raid-reliability-failure-anthology-part-1-primer/ Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 15:32:41 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 91C081065670 for ; Wed, 11 Jul 2012 15:32:41 +0000 (UTC) (envelope-from jusher71@yahoo.com) Received: from nm1-vm2.bullet.mail.ne1.yahoo.com (nm1-vm2.bullet.mail.ne1.yahoo.com [98.138.91.17]) by mx1.freebsd.org (Postfix) with SMTP id 538338FC0C for ; Wed, 11 Jul 2012 15:32:41 +0000 (UTC) Received: from [98.138.90.52] by nm1.bullet.mail.ne1.yahoo.com with NNFMP; 11 Jul 2012 15:32:35 -0000 Received: from [98.138.89.234] by tm5.bullet.mail.ne1.yahoo.com with NNFMP; 11 Jul 2012 15:32:35 -0000 Received: from [127.0.0.1] by omp1049.mail.ne1.yahoo.com with NNFMP; 11 Jul 2012 15:32:35 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 340653.30158.bm@omp1049.mail.ne1.yahoo.com Received: (qmail 79535 invoked by uid 60001); 11 Jul 2012 15:32:35 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1342020755; bh=Jq+Lv/Wu/g53PdamkEVGhHaxQLOQ7aHLO2F2ni7OGEk=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=KzMKm7/XN1ac4RcF624bjXFa/m8s4QEQ8JeIDzwlIaZtm5PxYMhLsC3LSqWDe2rfXmTQjHPFGHuqXlI31N6u2szdF9cIRLSYpWAycc0ibaHRj1zZ9nGCIpp4TNvQiCWu0UTNLvjW7Fbw/1iCxD3SXUncEK5J0Ioz411Ge/gmR6M= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding; b=2B/fahkEutQFz0+oW683OxA5H11Y7uwh8cpYuaVzaE3hGeGXygG4Rkipw58lqCiR/Or+SuNduZDGF8Y2SF1BpOyQagmODAdlZYyvpe8YRRQWzlyd7SOlqhS/cc4kGCYHETp+TPjIHV4+LQt11HygzGsIdpD8uEArDzWTQjgumBE=; X-YMail-OSG: eFivrcsVM1kUXgUF9zYfQUu3cxDdVhkEQ2UlSnlzN8dQ6pp vS8Vh9BMQy7qHrVLN5MSTD0OaenNmgqKMzOMYZCqNPubVODAT_AJRTOQNEnt 1LQ9K.0HPUdaFB4ikPo_NWMAH5_4CqSW9p3rEJA6LtHd.IoOv2YMsDQ06vFR Jm7b4zZ4HV9cwc.2tQI6TVJedvXdMUz5EbKIXDoY4UpSYfztUBqDGKaqBoGd nNF7BZhulVIPK_MXOiIkrptl4jMwSjoOvy9Uvtn6pUWkKw37MJzyJftwUUjc V3XstsiWbnrohggEsckN8dwXOg9MT8QuMm6Euq.qYd.9goBqDL8.EPPjxW91 TM4VRJTU1pLjHeE2KZsTrRC2btGlaN2IhTYQSceaQanD3rKJDVNnRIK5Q5tW Yq2i2K.UXts9H.jInOI46H6ND2RtLyk9MKRiHrHybIC_ZVUplFOBwZXyXEFe Ki10x9gW9o1C9CkMVqIkcH1wxXNZxIesI8yXT1PXos6L0tx1V12zk4QJqtna W.3E- Received: from [12.202.173.2] by web122502.mail.ne1.yahoo.com via HTTP; Wed, 11 Jul 2012 08:32:34 PDT X-Mailer: YahooMailClassic/15.0.8 YahooMailWebService/0.8.120.356233 Message-ID: <1342020754.79202.YahooMailClassic@web122502.mail.ne1.yahoo.com> Date: Wed, 11 Jul 2012 08:32:34 -0700 (PDT) From: Jason Usher To: Bob Friesenhahn In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: vdev/pool math with combined raidzX vdevs... 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 15:32:41 -0000

Hi Bob,

--- On Wed, 7/11/12, Bob Friesenhahn wrote:

> The probability is indeed additive just as you say.  My
> point is that the fundamental integrity is offered at the
> vdev level.  If a vdev fails, then the whole pool is
> gone.  The MTTDL calculations for various vdev
> topologies vary by orders of magnitude, which tends to make
> the additive nature of more vdevs insignificant.

Thanks again for responding.

I'm not going to beat this to death, but just to summarize, if F is 2, then the corresponding data loss probabilities for RAID-Z1, -Z2, -Z3 are: 14.9%, 1.3%, and 0.086%.

But if combining multiple vdevs into a zpool (as opposed to maintaining a different zpool for each raidz3 vdev) is additive, then raidz3 becomes .258%.

Since (I think) a lot of raidz3 adoption is due to folks desiring "some overkill" as they attempt to overcome the "disks got really big but didn't get any faster (for rebuilds)"[1] ... but they are losing some of that by combining vdevs in a single pool.

Not losing so much that they're back down to the failure rate of a single raidz*2* vdev, but they're not at the overkill level they thought they were at either.

I think that's important, or at least worth noting...

[1] http://storagegaga.com/4tb-disks-the-end-of-raid/

From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 16:25:43 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 33A5D1065670 for ; Wed, 11 Jul 2012 16:25:43 +0000 (UTC) (envelope-from chris@behanna.org) Received: from alayta.pair.com (alayta.pair.com [209.68.4.24]) by mx1.freebsd.org (Postfix) with ESMTP id 0F6EA8FC19 for ; Wed, 11 Jul 2012 16:25:43 +0000 (UTC) Received: from tourmalet.ticom-geo.com (unknown [64.132.190.26]) by alayta.pair.com (Postfix) with ESMTPSA id A4324D9837 for ; Wed, 11 Jul 2012 12:16:23 -0400 (EDT) Content-Type: text/plain; charset=iso-8859-1 Mime-Version: 1.0 (Apple Message framework v1278) From: Chris BeHanna In-Reply-To: <1342020754.79202.YahooMailClassic@web122502.mail.ne1.yahoo.com> Date: Wed, 11 Jul 2012 11:16:22 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: <1120F2CC-BFB2-401F-8114-58F3408DF1EF@behanna.org> References: <1342020754.79202.YahooMailClassic@web122502.mail.ne1.yahoo.com> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1278) Subject: Re: vdev/pool math with combined raidzX vdevs... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 16:25:43 -0000

On Jul 11, 2012, at 10:32 , Jason Usher wrote:

> Since (I think) a lot of raidz3 adoption is due to folks desiring "some overkill" as they attempt to overcome the "disks got really big but didn't get any faster (for rebuilds)"[1] ... but they are losing some of that by combining vdevs in a single pool.
>
> Not losing so much that they're back down to the failure rate of a single raidz*2* vdev, but they're not at the overkill level they thought they were at either.
>
> I think that's important, or at least worth noting...
>
> [1] http://storagegaga.com/4tb-disks-the-end-of-raid/

That, and unrecoverable read errors (UREs) during reconstruction, are indeed the problem. Gibson, et al, have gone on to object storage to get around this--RAID is done over the individual stored objects, rather than over the volume itself. If you need to reconstruct, you can reconstruct both on-demand and lazily in the background (i.e., you start reconstructing the objects in a volume, and if a user attempts to access an as-yet-unreconstructed object, that object gets inserted at the head of the queue).

There aren't, however, to my knowledge, any good-enough-to-use-at-work-without-hiring-a-pet-kernel-hacker object-based file systems available for free[1]. CMU PDL did raidframe, but that was a proof-of-concept and had not been bulletproofed and optimized (though many of the concepts there found their way into Panasas's PanFS).

In the absence of a ready-to-go (or at least ready-to-assemble) object-based solution, ZFS is the next best thing. You at least can get some warning from the parity scrub that objects are corrupted, and can have some duplicates lying around to recover. That said, you're going to want to keep your failure domains fairly small, if you can, owing to the time-to-reconstruct and the inevitability of UREs[2] when volumes get large enough.

--
Chris BeHanna
chris@behanna.org

[1] Because it's very, very hard. Panasas has been at it, full time, for more than ten years. Spinnaker was at it for a long time, too, prior to the NetApp acquisition. There's also Storage Tank and GFS, and there was Zambeel, and a few others.

[2] Garth Gibson talks about UREs on page 2: http://gcn.com/articles/2008/07/25/garth-gibson--faster-storage-systems-through-parallelism.aspx

From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 16:25:52 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id B110B1065772 for ; Wed, 11 Jul 2012 16:25:51 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 0AC408FC0A for ; Wed, 11 Jul 2012 16:25:50 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id q6BGPndf000763; Wed, 11 Jul 2012 11:25:49 -0500 (CDT) Date: Wed, 11 Jul 2012 11:25:49 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Jason Usher In-Reply-To: <1342020754.79202.YahooMailClassic@web122502.mail.ne1.yahoo.com> Message-ID: References: <1342020754.79202.YahooMailClassic@web122502.mail.ne1.yahoo.com> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Wed, 11 Jul 2012 11:25:49 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: vdev/pool math with combined raidzX vdevs...
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 16:25:52 -0000 On Wed, 11 Jul 2012, Jason Usher wrote: > > I'm not going to beat this to death, but just to summarize, if F is > 2, then the corresponding data loss probabilities for RAID-Z1, -Z2, > -Z3 are: 14.9%, 1.3%, and 0.086%. I am not sure what the above percentages mean. Regardless, I do agree that drive size is a factor in vdev reliability (MTTDL). Huge drives lessen reliability due to increased recovery time and placing more data at risk. There may be a cap on the maximum size of "enterprise" drives since "enterprise" drives are normally targeted for RAID applications. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 18:14:15 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BC64B106564A for ; Wed, 11 Jul 2012 18:14:15 +0000 (UTC) (envelope-from pho@holm.cc) Received: from relay02.pair.com (relay02.pair.com [209.68.5.16]) by mx1.freebsd.org (Postfix) with SMTP id 710338FC1E for ; Wed, 11 Jul 2012 18:14:15 +0000 (UTC) Received: (qmail 82469 invoked from network); 11 Jul 2012 18:07:34 -0000 Received: from 87.58.146.107 (HELO x2.osted.lan) (87.58.146.107) by relay02.pair.com with SMTP; 11 Jul 2012 18:07:34 -0000 X-pair-Authenticated: 87.58.146.107 Received: from x2.osted.lan (localhost [127.0.0.1]) by x2.osted.lan (8.14.5/8.14.5) with ESMTP id q6BI7WeG048085; Wed, 11 Jul 2012 20:07:32 +0200 (CEST) (envelope-from pho@x2.osted.lan) Received: (from pho@localhost) by x2.osted.lan (8.14.5/8.14.5/Submit) id q6BI7W7B048084; Wed, 11 Jul 2012 20:07:32 +0200 (CEST) (envelope-from pho) Date: Wed, 11 Jul 2012 20:07:32 +0200 From: Peter Holm To: John Baldwin Message-ID: <20120711180732.GA46834@x2.osted.lan> References: <201203071318.08241.jhb@freebsd.org> <201207091138.15655.jhb@freebsd.org> <20120709204007.GW2338@deviant.kiev.zoral.com.ua> <201207091648.32306.jhb@freebsd.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <201207091648.32306.jhb@freebsd.org> User-Agent: Mutt/1.4.2.3i Cc: freebsd-fs@freebsd.org Subject: Re: close() of an flock'd file is not atomic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 18:14:15 -0000 On Mon, Jul 09, 2012 at 04:48:32PM -0400, John Baldwin wrote: > On Monday, July 09, 2012 4:40:07 pm Konstantin Belousov wrote: > > On Mon, Jul 09, 2012 at 11:38:15AM -0400, John Baldwin wrote: > > > Here now is the tested version of the actual fix after the vn_open_vnode() > > > changes were committed. This is hopefully easier to parse now. > > > > > > http://www.FreeBSD.org/~jhb/patches/flock_open_close4.patch > > > > Do you need atomic op to set FHASLOCK in vn_open_cred ? I do not think > > *fp can be shared with other thread there. > > Oh, that's true. I had just preserved it from the original code. 
>
> > I thought that vrele() call in vn_closefile() would need a
> > vn_start_write() or vn_start_secondary_write() dance around it, but
> > now I believe it is not needed, since ufs_inactive() handles start of
> > secondary writes on its own. Still, it would be good if Peter could test
> > the patch with snapshotting load just to be safe there.
>
> Ok. I'm happy to have pho@ test it, but the test will have to use file
> locking along with snapshots to exercise this case.
>

Verified your scenario on a pristine head and it fails like this:

$ uname -a
FreeBSD x4.osted.lan 10.0-CURRENT FreeBSD 10.0-CURRENT #0 r234951
$ /usr/bin/time -h ./flock_open_close.sh
flock_open_close: execv(/mnt/test): Text file busy
FAIL
3,79s real 0,24s user 0,78s sys
$

Not a problem with your patch. The patch has further been stress tested for 24 hours without any problems showing up.

- Peter

From owner-freebsd-fs@FreeBSD.ORG Wed Jul 11 22:16:04 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CB64D1065686 for ; Wed, 11 Jul 2012 22:16:04 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from plane.gmane.org (plane.gmane.org [80.91.229.3]) by mx1.freebsd.org (Postfix) with ESMTP id 877498FC08 for ; Wed, 11 Jul 2012 22:16:04 +0000 (UTC) Received: from list by plane.gmane.org with local (Exim 4.69) (envelope-from ) id 1Sp5CU-0000J2-Ry for freebsd-fs@freebsd.org; Thu, 12 Jul 2012 00:16:02 +0200 Received: from cpe-188-129-83-64.dynamic.amis.hr ([188.129.83.64]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Thu, 12 Jul 2012 00:16:02 +0200 Received: from ivoras by cpe-188-129-83-64.dynamic.amis.hr with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Thu, 12 Jul 2012 00:16:02 +0200 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Ivan Voras Date: Thu, 12 Jul 2012 00:15:45 +0200 Lines: 27 Message-ID: Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@dough.gmane.org X-Gmane-NNTP-Posting-Host: cpe-188-129-83-64.dynamic.amis.hr User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20120614 Thunderbird/13.0.1 Subject: wdrain hang X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Jul 2012 22:16:04 -0000

Hello,

I started writing a tutorial on ggate and have encountered a bug I thought was solved long ago, but apparently it was only worked around:

http://ivoras.net/blog/tree/2012-07-06.writing-a-geom-gate-module-part-4.html

The problem is that writing to a file system from within a ggate module (and a similar thing used to happen with md(4)) hangs when a certain amount of data gets in-flight. I think this happens when the amount of in-flight data from the upper layer (i.e. the file system sitting on top of a ggate device) + the amount of data on the lower layer (the file system to which the userland ggate module writes) gets greater than hirunningspace, which somehow causes a deadlock in waitrunningbufspace().

I don't understand exactly how this deadlock happens, since it looks like one of the processes which does the writing (either the one writing to the ggate module or the ggate module itself) should probably hang in mtx_lock() but apparently both hang in the "wdrain" state.

Can someone explain what happens here?
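For anyone who wants to test the hirunningspace theory: the thresholds involved are plain sysctls, so it is at least easy to check whether raising the limit merely moves the hang point (the value below is only an example, not a recommendation):

$ sysctl vfs.hirunningspace vfs.lorunningspace
# sysctl vfs.hirunningspace=16777216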
So far this issue has been worked around by using O_DIRECT, but in the case of the tutorial I'm writing that's not possible, so I'm wondering if there is another workaround?

From owner-freebsd-fs@FreeBSD.ORG Thu Jul 12 13:17:08 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6CACC106564A for ; Thu, 12 Jul 2012 13:17:08 +0000 (UTC) (envelope-from freebsd-listen@fabiankeil.de) Received: from smtprelay02.ispgateway.de (smtprelay02.ispgateway.de [80.67.18.14]) by mx1.freebsd.org (Postfix) with ESMTP id D9B928FC08 for ; Thu, 12 Jul 2012 13:17:07 +0000 (UTC) Received: from [84.44.178.238] (helo=fabiankeil.de) by smtprelay02.ispgateway.de with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1SpJGP-0001QE-5k for freebsd-fs@freebsd.org; Thu, 12 Jul 2012 15:17:01 +0200 Date: Thu, 12 Jul 2012 15:15:41 +0200 From: Fabian Keil To: freebsd-fs@freebsd.org Message-ID: <20120712151541.7f3a6886@fabiankeil.de> In-Reply-To: <1341864787.32803.43.camel@btw.pki2.com> References: <1341864787.32803.43.camel@btw.pki2.com> Mime-Version: 1.0 X-Df-Sender: Nzc1MDY3 Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 12 Jul 2012 13:17:08 -0000

Dennis Glatting wrote:

> I have a ZFS array of disks where the system simply stops as if forever
> blocked by some IO mutex. This happens often and the following is the
> output of top:
>
> last pid:  6075;  load averages:  0.00,  0.00,  0.00  up 0+16:54:41  13:04:10
> 135 processes: 1 running, 134 sleeping
> CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> Mem: 47M Active, 24M Inact, 18G Wired, 120M Buf, 44G Free
> Swap: 32G Total, 32G Free
>
>  PID USERNAME THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
> 2410 root       1  33    0 11992K  2820K zio->i  7 331:25  0.00% bzip2
> 2621 root       1  52    4 28640K  5544K tx->tx 24 245:33  0.00% john
> 2624 root       1  48    4 28640K  5544K tx->tx  4 239:08  0.00% john
> 2623 root       1  49    4 28640K  5544K tx->tx  7 238:44  0.00% john

Does top continue to run or does it hang as well?

I believe the locks shown above shouldn't affect already
running applications that don't cause disk traffic.

> My question is what should I be looking at and how should I look at it?
> There is nothing in the logs or the console, rather the system is
> forever paused and entering commands results in no response (it's as if
> everything is deadlocked).
If the entered commands actually start you can try sending
SIGINFO with CTRL+T to request a status:

fk@r500 ~ $zpool status
load: 0.15 cmd: zpool 2698 [spa_namespace_lock] 543.23r 0.00u 0.12s 0% 2908k

If you can run procstat you can try getting kernel stack traces
for some (or all) processes that should give you a rough idea
of how the lock is reached:

fk@r500 ~ $procstat -kk $(pgrep zpool)
  PID    TID COMM     TDNAME   KSTACK
 2698 100922 zpool    -        mi_switch+0x196 sleepq_wait+0x42 _sx_xlock_hard+0x525 _sx_xlock+0x75 spa_all_configs+0x5e zfs_ioc_pool_configs+0x29 zfsdev_ioctl+0xe6 devfs_ioctl_f+0x7b kern_ioctl+0x115 sys_ioctl+0xfd amd64_syscall+0x5f9 Xfast_syscall+0xf7
 2388 100431 zpool    -        mi_switch+0x196 sleepq_wait+0x42 _sx_xlock_hard+0x525 _sx_xlock+0x75 spa_open_common+0x7a spa_get_stats+0x5b zfs_ioc_pool_stats+0x2c zfsdev_ioctl+0xe6 devfs_ioctl_f+0x7b kern_ioctl+0x115 sys_ioctl+0xfd amd64_syscall+0x5f9 Xfast_syscall+0xf7

If procstat hangs as well you could try executing it in a loop
before the problem occurs. If it stays in the cache it may keep
running after most of the other processes stop responding.

DTrace can be useful for analyzing locking issues as well:

fk@r500 ~ $sudo ~/scripts/flowtrace.d zfsdev_ioctl
0 101057 2012 Jul 12 14:59:42.155 00: Trace in progress. Waiting to enter zfsdev_ioctl. Hit CTRL-C to exit.
0 100979 2012 Jul 12 14:59:43.933 04: --> zfsdev_ioctl:entry
0 100979 2012 Jul 12 14:59:43.933 05: --> zfs_secpolicy_none:entry
0 100979 2012 Jul 12 14:59:43.933 05: <-- zfs_secpolicy_none:return
0 100979 2012 Jul 12 14:59:43.933 05: --> zfs_ioc_pool_configs:entry
0 100979 2012 Jul 12 14:59:43.933 06: --> spa_all_configs:entry
0 100979 2012 Jul 12 14:59:43.933 07: --> nvlist_alloc:entry
0 100979 2012 Jul 12 14:59:43.933 07: [...]
0 100979 2012 Jul 12 14:59:43.933 07: <-- nvlist_alloc:return
0 100979 2012 Jul 12 14:59:43.933 07: --> _sx_xlock:entry
0 100979 2012 Jul 12 14:59:43.933 08: --> _sx_xlock_hard:entry
0 100979 2012 Jul 12 14:59:43.933 09: --> sleepq_lock:entry
0 100979 2012 Jul 12 14:59:43.933 10: --> spinlock_enter:entry
0 100979 2012 Jul 12 14:59:43.933 10: --> critical_enter:entry
0 100979 2012 Jul 12 14:59:43.933 10: <-- critical_enter:return
0 100979 2012 Jul 12 14:59:43.933 09: <-- sleepq_lock:return
0 100979 2012 Jul 12 14:59:43.933 09: --> lockstat_nsecs:entry
0 100979 2012 Jul 12 14:59:43.933 10: --> binuptime:entry
0 100979 2012 Jul 12 14:59:43.933 11: --> hpet_get_timecount:entry
0 100979 2012 Jul 12 14:59:43.933 11: <-- hpet_get_timecount:return
0 100979 2012 Jul 12 14:59:43.933 10: <-- binuptime:return
0 100979 2012 Jul 12 14:59:43.933 09: <-- lockstat_nsecs:return
0 100979 2012 Jul 12 14:59:43.933 09: --> sleepq_add:entry
0 100979 2012 Jul 12 14:59:43.933 10: --> sleepq_lookup:entry
0 100979 2012 Jul 12 14:59:43.933 10: <-- sleepq_lookup:return
0 100979 2012 Jul 12 14:59:43.933 10: --> thread_lock_flags_:entry
0 100979 2012 Jul 12 14:59:43.933 11: --> spinlock_enter:entry
0 100979 2012 Jul 12 14:59:43.933 11: --> critical_enter:entry
0 100979 2012 Jul 12 14:59:43.933 11: <-- critical_enter:return
0 100979 2012 Jul 12 14:59:43.933 10: <-- thread_lock_flags_:return
0 100979 2012 Jul 12 14:59:43.933 09: --> spinlock_exit:entry
0 100979 2012 Jul 12 14:59:43.933 10: --> critical_exit:entry
0 100979 2012 Jul 12 14:59:43.933 10: <-- critical_exit:return
0 100979 2012 Jul 12 14:59:43.933 09: <-- spinlock_exit:return
0 100979 2012 Jul 12 14:59:43.933 09: --> sleepq_wait:entry
0 100979 2012 Jul 12 14:59:43.933 10: --> thread_lock_flags_:entry
0 100979 2012 Jul 12 14:59:43.933 11: --> spinlock_enter:entry
0 100979 2012 Jul 12 14:59:43.933 11: --> critical_enter:entry
0 100979 2012 Jul 12 14:59:43.933 11: <-- critical_enter:return
0 100979 2012 Jul 12 14:59:43.933 10: <-- thread_lock_flags_:return
0 100979 2012 Jul 12 14:59:43.933 10: --> sleepq_switch:entry
0 100979 2012 Jul 12 14:59:43.933 11: --> sched_sleep:entry
0 100979 2012 Jul 12 14:59:43.933 11: <-- sched_sleep:return
0 100979 2012 Jul 12 14:59:43.933 11: --> thread_lock_set:entry
0 100979 2012 Jul 12 14:59:43.933 11: --> spinlock_exit:entry
0 100979 2012 Jul 12 14:59:43.933 12: --> critical_exit:entry
0 100979 2012 Jul 12 14:59:43.933 12: <-- critical_exit:return
0 100979 2012 Jul 12 14:59:43.933 11: <-- spinlock_exit:return
0 100979 2012 Jul 12 14:59:43.933 10: --> mi_switch:entry
0 100979 2012 Jul 12 14:59:43.933 11: --> rdtsc:entry
0 100979 2012 Jul 12 14:59:43.933 11: <-- rdtsc:return
0 100979 2012 Jul 12 14:59:43.933 11: --> sched_switch:entry
0 100979 2012 Jul 12 14:59:43.933 12: --> sched_pctcpu_update:entry
0 100979 2012 Jul 12 14:59:43.933 12: <-- sched_pctcpu_update:return
0 100979 2012 Jul 12 14:59:43.933 12: --> spinlock_enter:entry
0 100979 2012 Jul 12 14:59:43.933 12: --> critical_enter:entry
0 100979 2012 Jul 12 14:59:43.933 12: <-- critical_enter:return
0 100979 2012 Jul 12 14:59:43.933 12: --> thread_lock_block:entry
0 100979 2012 Jul 12 14:59:43.933 13: --> spinlock_exit:entry
0 100979 2012 Jul 12 14:59:43.933 14: --> critical_exit:entry
0 100979 2012 Jul 12 14:59:43.933 14: <-- critical_exit:return
0 100979 2012 Jul 12 14:59:43.933 13: <-- spinlock_exit:return
0 100979 2012 Jul 12 14:59:43.933 12: <-- thread_lock_block:return
0 100979 2012 Jul 12 14:59:43.933 12: --> tdq_load_rem:entry
0 100979 2012 Jul 12 14:59:43.933 12: <-- tdq_load_rem:return
0 100979 2012 Jul 12 14:59:43.933 12: --> choosethread:entry
0 100979 2012 Jul 12 14:59:43.933 13: --> sched_choose:entry
0 100979 2012 Jul 12 14:59:43.933 14: --> tdq_choose:entry
0 100979 2012 Jul 12 14:59:43.933 15: --> runq_choose:entry
0 100979 2012 Jul 12 14:59:43.933 15: <-- runq_choose:return
0 100979 2012 Jul 12 14:59:43.933 15: --> runq_choose_from:entry
0 100979 2012 Jul 12 14:59:43.933 15: <-- runq_choose_from:return
0 100979 2012 Jul 12 14:59:43.933 14: --> runq_choose:entry
0 100979 2012 Jul 12 14:59:43.933 14: <-- runq_choose:return
0 100979 2012 Jul 12 14:59:43.933 13: <-- sched_choose:return
0 100979 2012 Jul 12 14:59:43.933 12: <-- choosethread:return
0 100979 2012 Jul 12 14:59:43.933 12: --> sched_pctcpu_update:entry
0 100979 2012 Jul 12 14:59:43.933 12: <-- sched_pctcpu_update:return

Of course once you know where ZFS is hanging you still have to figure
out the why ...

Fabian

From owner-freebsd-fs@FreeBSD.ORG Thu Jul 12 23:18:15 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 52D0E106566B; Thu, 12 Jul 2012 23:18:15 +0000 (UTC) (envelope-from asmrookie@gmail.com) Received: from mail-lb0-f182.google.com (mail-lb0-f182.google.com [209.85.217.182]) by mx1.freebsd.org (Postfix) with ESMTP id 2492C8FC0A; Thu, 12 Jul 2012 23:18:13 +0000 (UTC) Received: by lbon10 with SMTP id n10so4844403lbo.13 for ; Thu, 12 Jul 2012 16:18:12 -0700 (PDT) MIME-Version: 1.0 Received: by 10.152.136.18 with SMTP id pw18mr156386lab.17.1342135092829; Thu, 12 Jul 2012 16:18:12 -0700 (PDT) Sender: asmrookie@gmail.com Received: by 10.112.27.65 with HTTP; Thu, 12 Jul 2012 16:18:12 -0700 (PDT) In-Reply-To: References: Date: Fri, 13 Jul 2012 00:18:12 +0100 X-Google-Sender-Auth: OpTdrpDJgACfGzZ9P5Xj4oGeLtA Message-ID: From: Attilio Rao To: FreeBSD FS , freebsd-current@freebsd.org, Peter Holm , Gustau Pérez , George Neville-Neil Content-Type: text/plain; charset=UTF-8 Cc: Subject: Re: MPSAFE VFS -- List of upcoming actions X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: attilio@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 12 Jul 2012 23:18:15 -0000

2012/7/4 Attilio Rao :
> 2012/6/29 Attilio Rao :
>> As already published several times, according to the following plan:
>> http://wiki.freebsd.org/NONMPSAFE_DEORBIT_VFS
>>
> I still haven't heard from Vivien or Edward, anyway as NTFS is
> basically only used RO these days (also the mount_ntfs code just
> permits RO mounting) I stripped all the incomplete/bogus write support
> with the following patch:
> http://www.freebsd.org/~attilio/ntfs_remove_write.patch
>
> This is an attempt to make the code smaller and possibly just focus on
> the locking that really matters (as a read-only filesystem).
> On some points of the patch I'm a bit less sure as we could easily
> take into account also write for things like vaccess() arguments, and
> make it easier to re-add correct write support at some point in the
> future, but still force RO, even if the approach used in the patch is
> more correct IMHO.
> As an added bonus this patch cleans some dirty code in the mount
> operation and fixes a bug as vfs_mountedfrom() is called before real
> mounting is completed and can still fail.

A quick update on this. It looks like NTFS won't be completed for this GSoC, so I seriously need to find an alternative in order not to lose NTFS support entirely.

I tried to look at the NTFS implementation as it stands right now and the support is really poor. As Peter has also verified, it can deadlock in no time, it completely violates VFS rules, etc. IMHO it deserves a complete rewrite if we are to keep supporting in-kernel NTFS.

I also tried to look at the NetBSD implementation. Their code is somewhat similar to ours, but they used very complicated (and very dirty) code to do the locking. Even though I don't know the NetBSD VFS well enough, I have the impression that not all the races are correctly handled. Definitely not something I would like to port.

Considering all that, the only viable option remaining is a userland filesystem implementation. My preferred choice would be to import PUFFS, with librefuse on top of it, but honestly that requires a lot of time to be completed, time which I don't currently have, as in 2 months Giant must be gone from the VFS.

I then decided to switch to gnn's revamp of the FUSE patches. You can find his initial e-mail here:
http://lists.freebsd.org/pipermail/freebsd-fs/2012-March/013876.html

I've taken the second version of George's patch and created this dolphin branch:
svn://svn.freebsd.org/base/projects/fuse

I'm fixing low hanging fruit for the moment (see r238411 for example) and I still have to make a thorough review. However my idea is to commit the support once:
- ntfs-3g is well stress-tested and proves to be bug-free
- there is no major/big technical issue pending after the reviews

I'm now looking for people sticking with the branch and trying to stress-test ntfs-3g as much as they can. For example I know that Gustau (cc'ed) already had issues. It would be good if he tries to reproduce them and make a full report. Please try to stick with the code contained in this branch for the tests unless advised otherwise.

As a final note, George has agreed to maintain FUSE in the long term and of course I'll give him a hand as time permits.

Thanks,
Attilio

--
Peace can only be achieved by understanding - A. Einstein
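For anyone picking up Attilio's stress-testing request, a minimal ntfs-3g smoke test could look like the following (assumptions: the fuse kernel module built from the projects/fuse branch is installed, ntfs-3g comes from the sysutils/fusefs-ntfs port, and the device name and mount point are only examples):

# kldload fuse
# ntfs-3g /dev/ada1s1 /mnt/ntfs
# cp -R /usr/src /mnt/ntfs/src && diff -r /usr/src /mnt/ntfs/src
# umount /mnt/ntfs

Anything that deadlocks, panics, or miscompares during runs like this is exactly the kind of report being asked for.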
From owner-freebsd-fs@FreeBSD.ORG Fri Jul 13 05:34:11 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 76DC2106566C for ; Fri, 13 Jul 2012 05:34:11 +0000 (UTC) (envelope-from dg@pki2.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 3331B8FC12 for ; Fri, 13 Jul 2012 05:34:11 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q6D5XxlF015784; Thu, 12 Jul 2012 22:34:00 -0700 (PDT) (envelope-from dg@pki2.com) From: Dennis Glatting To: Fabian Keil In-Reply-To: <20120712151541.7f3a6886@fabiankeil.de> References: <1341864787.32803.43.camel@btw.pki2.com> <20120712151541.7f3a6886@fabiankeil.de> Content-Type: text/plain; charset="ISO-8859-1" Date: Thu, 12 Jul 2012 22:33:59 -0700 Message-ID: <1342157639.60708.11.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q6D5XxlF015784 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: dg@pki2.com Cc: freebsd-fs@freebsd.org Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jul 2012 05:34:11 -0000

On Thu, 2012-07-12 at 15:15 +0200, Fabian Keil wrote:
> Dennis Glatting wrote:
>
> > I have a ZFS array of disks where the system simply stops as if forever
> > blocked by some IO mutex. This happens often and the following is the
> > output of top:
> >
> > last pid:  6075;  load averages:  0.00,  0.00,  0.00  up 0+16:54:41  13:04:10
> > 135 processes: 1 running, 134 sleeping
> > CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> > Mem: 47M Active, 24M Inact, 18G Wired, 120M Buf, 44G Free
> > Swap: 32G Total, 32G Free
> >
> >  PID USERNAME THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
> > 2410 root       1  33    0 11992K  2820K zio->i  7 331:25  0.00% bzip2
> > 2621 root       1  52    4 28640K  5544K tx->tx 24 245:33  0.00% john
> > 2624 root       1  48    4 28640K  5544K tx->tx  4 239:08  0.00% john
> > 2623 root       1  49    4 28640K  5544K tx->tx  7 238:44  0.00% john
>
> Does top continue to run or does it hang as well?

It continues to run.

> I believe the locks shown above shouldn't affect already
> running applications that don't cause disk traffic.
>
> > My question is what should I be looking at and how should I look at it?
> > There is nothing in the logs or the console, rather the system is
> > forever paused and entering commands results in no response (it's as if
> > everything is deadlocked).
>
> If the entered commands actually start you can try sending
> SIGINFO with CTRL+T to request a status:
>
> fk@r500 ~ $zpool status
> load: 0.15 cmd: zpool 2698 [spa_namespace_lock] 543.23r 0.00u 0.12s 0% 2908k
>
> If you can run procstat you can try getting kernel stack traces
> for some (or all) processes that should give you a rough idea
> of how the lock is reached:
>
> [...]
>
> If procstat hangs as well you could try executing it in a loop
> before the problem occurs. If it stays in the cache it may keep
> running after most of the other processes stop responding.
>
> DTrace can be useful for analyzing locking issues as well:
>
> [...]
>
> Of course once you know where ZFS is hanging you still have to figure
> out the why ...

Thanks! I've made several changes to the volumes and am rerunning my stuff now (started an hour ago).

Rather than having a single volume of hodgepodge disks I now span two RAIDz pools of similar disks, partly because I was curious and partly because I thought it'd be fun. I also asserted ashift=12 and I have replaced/upgraded some of the disks. I have found, but not run, a bit of Seagate code that forces 4k sectors on the ST32000542AS disks -- they report 512 byte sectors but are really 4k. I also ran Seatools against some of my disks.

Array now:

iirc# zpool status disk-1
  pool: disk-1
 state: ONLINE
  scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        disk-1           ONLINE       0     0     0
          raidz1-0       ONLINE       0     0     0
            da2          ONLINE       0     0     0
            da3          ONLINE       0     0     0
            da4          ONLINE       0     0     0
            da8          ONLINE       0     0     0
          raidz1-1       ONLINE       0     0     0
            da1          ONLINE       0     0     0
            da5          ONLINE       0     0     0
            da6          ONLINE       0     0     0
            da7          ONLINE       0     0     0
            da9          ONLINE       0     0     0
        logs
          gpt/zil-disk1  ONLINE       0     0     0
        cache
          ada1           ONLINE       0     0     0

errors: No known data errors

--
Dennis Glatting

From owner-freebsd-fs@FreeBSD.ORG Fri Jul 13 09:19:44 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CFC93106564A for ; Fri, 13 Jul 2012 09:19:44 +0000 (UTC) (envelope-from c.kworr@gmail.com) Received: from mail-bk0-f54.google.com (mail-bk0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 4FD778FC17 for ; Fri, 13 Jul 2012 09:19:44 +0000 (UTC) Received: by bkcje9 with SMTP id je9so3092664bkc.13 for ; Fri, 13 Jul 2012 02:19:43 -0700 (PDT) Received: by 10.204.155.156 with SMTP id s28mr236829bkw.74.1342171183087; Fri, 13 Jul 2012 02:19:43 -0700 (PDT) Received: from green.tandem.local (41-200-200-46.pool.ukrtel.net.
[46.200.200.41]) by mx.google.com with ESMTPS id 25sm4309652bkx.9.2012.07.13.02.19.40 (version=SSLv3 cipher=OTHER); Fri, 13 Jul 2012 02:19:41 -0700 (PDT) Message-ID: <4FFFE82B.6010109@gmail.com> Date: Fri, 13 Jul 2012 12:19:39 +0300 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:12.0) Gecko/20120605 Firefox/12.0 SeaMonkey/2.9.1 MIME-Version: 1.0 To: Dennis Glatting References: <1341864787.32803.43.camel@btw.pki2.com> In-Reply-To: <1341864787.32803.43.camel@btw.pki2.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jul 2012 09:19:44 -0000

Dennis Glatting wrote:
> I have a ZFS array of disks where the system simply stops as if forever
> blocked by some IO mutex. This happens often and the following is the
> output of top:

Try switching to clang. Some time ago I was hit by a different error: a process would hang indefinitely and couldn't be killed. After building the system with clang I obtained a core dump at first reboot, and research turned out that there was a broken directory entry in the file system. Recreating the damaged zfs filesystem (leaving the rest of the pool intact) solved my problem completely.

--
Sphinx of black quartz judge my vow.

From owner-freebsd-fs@FreeBSD.ORG Fri Jul 13 13:47:22 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EB984106566B for ; Fri, 13 Jul 2012 13:47:22 +0000 (UTC) (envelope-from freebsd@penx.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id B87A38FC15 for ; Fri, 13 Jul 2012 13:47:22 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q6DDlIuG082016; Fri, 13 Jul 2012 06:47:18 -0700 (PDT) (envelope-from freebsd@penx.com) From: Dennis Glatting To: Volodymyr Kostyrko In-Reply-To: <4FFFE82B.6010109@gmail.com> References: <1341864787.32803.43.camel@btw.pki2.com> <4FFFE82B.6010109@gmail.com> Content-Type: text/plain; charset="us-ascii" Date: Fri, 13 Jul 2012 06:47:18 -0700 Message-ID: <1342187238.60733.27.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q6DDlIuG082016 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: freebsd@penx.com Cc: freebsd-fs@freebsd.org Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jul 2012 13:47:23 -0000

On Fri, 2012-07-13 at 12:19 +0300, Volodymyr Kostyrko wrote:
> Dennis Glatting wrote:
> > I have a ZFS array of disks where the system simply stops as if forever
> > blocked by some IO mutex. This happens often and the following is the
> > output of top:
>
> Try switching to clang. Some time ago I was hit by a different error: a
> process would hang indefinitely and couldn't be killed. After building
> the system with clang I obtained a core dump at first reboot, and
> research turned out that there was a broken directory entry in the file
> system.
> Recreating damaged zfs filesystem (leaving all other pool intact) solved > my problem completely. > I am using clang except on my CVS mirrors. I found on the mirrors that the mirror itself cannot update from itself but other hosts can update from the mirror. Somewhere in that M3/assembly muck something crashes in the process. The only way around the problem is to compile the /OS/ using GCC. On the system in question(iirc) I rebuilt the pool yesterday -- I'm in the process of updating parts across my systems. I also wanted to fool around with different ZFS architectures. This morning, with a load average throughout the night of 42 on a 32 core system writing 4TB of data, it is still alive and kicking but its early in the run. From owner-freebsd-fs@FreeBSD.ORG Fri Jul 13 14:46:54 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id EB5F41065673 for ; Fri, 13 Jul 2012 14:46:54 +0000 (UTC) (envelope-from lytboris@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id A70628FC15 for ; Fri, 13 Jul 2012 14:46:54 +0000 (UTC) Received: by yhfs35 with SMTP id s35so4422146yhf.13 for ; Fri, 13 Jul 2012 07:46:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=8rgUtx8rM1oXv/x8Lfv3V0Xx+F+f1xiJuSOCeEb26Mo=; b=YbGKCYmoxRknqkjDdyF11jPS8CkVaHBtWnRualgUcBoe4ZKWFpvJeH4y01tJsFNe3y WTKQ/yK56DptawLsAA9HOCcrzD5uV5uqtu/tZOKHQV45JUJMaYlq9jfs55b3kRHkpSAu J1ToroAAco5h5sEPnhixCJJhvQ45tfJqcIrWSBMM32LWieWhg/ICBcOKuTJFwh6SYDht N3tqwXu6cVwsgA7hN+jqTKYyYo1+wpncfhzitgC4+WWBM/p4LkHIXc8DjZedg11HSg0F OEZPvd7gV4KIUzDHqXuogP4GobKeFrYHSKzsYYBOPwB7Kd3sQAsHIsXyMWlwGGywKAm8 v3+w== MIME-Version: 1.0 Received: by 10.66.76.196 with SMTP id m4mr2593634paw.61.1342190807456; Fri, 13 Jul 2012 07:46:47 -0700 (PDT) Received: by 10.66.148.200 with HTTP; Fri, 13 Jul 2012 07:46:47 -0700 (PDT) In-Reply-To: <20120712151541.7f3a6886@fabiankeil.de> References: <1341864787.32803.43.camel@btw.pki2.com> <20120712151541.7f3a6886@fabiankeil.de> Date: Fri, 13 Jul 2012 18:46:47 +0400 Message-ID: From: Lytochkin Boris To: Fabian Keil Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jul 2012 14:46:55 -0000 Hi. On Thu, Jul 12, 2012 at 5:15 PM, Fabian Keil wrote: > fk@r500 ~ $zpool status > load: 0.15 cmd: zpool 2698 [spa_namespace_lock] 543.23r 0.00u 0.12s 0% 2908k This sounds familiar with http://www.freebsd.org/cgi/query-pr.cgi?pr=163770 Try playing with kern.maxvnodes. 
--
Boris Lytochkin

From owner-freebsd-fs@FreeBSD.ORG Fri Jul 13 15:07:10 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1300B106566B for ; Fri, 13 Jul 2012 15:07:10 +0000 (UTC) (envelope-from freebsd-listen@fabiankeil.de) Received: from smtprelay01.ispgateway.de (smtprelay01.ispgateway.de [80.67.29.23]) by mx1.freebsd.org (Postfix) with ESMTP id C10708FC08 for ; Fri, 13 Jul 2012 15:07:09 +0000 (UTC) Received: from [78.35.148.244] (helo=fabiankeil.de) by smtprelay01.ispgateway.de with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1SphS0-0008An-4F; Fri, 13 Jul 2012 17:06:36 +0200 Date: Fri, 13 Jul 2012 17:06:32 +0200 From: Fabian Keil To: Lytochkin Boris Message-ID: <20120713170632.065e650e@fabiankeil.de> In-Reply-To: References: <1341864787.32803.43.camel@btw.pki2.com> <20120712151541.7f3a6886@fabiankeil.de> Mime-Version: 1.0 X-Df-Sender: Nzc1MDY3 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-fs@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jul 2012 15:07:10 -0000

Lytochkin Boris wrote:

> On Thu, Jul 12, 2012 at 5:15 PM, Fabian Keil wrote:
> > fk@r500 ~ $zpool status
> > load: 0.15 cmd: zpool 2698 [spa_namespace_lock] 543.23r 0.00u 0.12s 0% 2908k
>
> This sounds similar to http://www.freebsd.org/cgi/query-pr.cgi?pr=163770
> Try playing with kern.maxvnodes.

Thanks for the suggestion, but the system is my laptop and I already
set kern.maxvnodes=400000 which I suspect is more than I'll ever need.

Currently it uses less than a tenth of this, but I'll keep an eye on
it the next time the issue occurs.

I usually reach this deadlock after losing the vdev in a single-vdev pool.
My suspicion is that the deadlock is caused by some kind of "failure to
communicate" between ZFS and the various geom layers involved.

I already know that losing vdevs with the pool configuration I use
can cause http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162010
and http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162036 and I
suspect that the deadlock is just another symptom of the same issue.
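Until then, the procstat loop mentioned earlier is nothing fancy; something along these lines, started before the problem occurs (the log path and interval are arbitrary, and the pgrep pattern assumes a zpool process is what ends up stuck):

$ while true; do date; procstat -kk $(pgrep zpool); sleep 10; done >> /var/tmp/procstat.log 2>&1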
Fabian

From owner-freebsd-fs@FreeBSD.ORG Fri Jul 13 15:25:46 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id ADED41065673 for ; Fri, 13 Jul 2012 15:25:46 +0000 (UTC) (envelope-from freebsd@pki2.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 5E5308FC19 for ; Fri, 13 Jul 2012 15:25:46 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q6DFPavZ016136; Fri, 13 Jul 2012 08:25:36 -0700 (PDT) (envelope-from freebsd@pki2.com) From: Dennis Glatting To: freebsd-fs@freebsd.org In-Reply-To: <20120713170632.065e650e@fabiankeil.de> References: <1341864787.32803.43.camel@btw.pki2.com> <20120712151541.7f3a6886@fabiankeil.de> <20120713170632.065e650e@fabiankeil.de> Content-Type: text/plain; charset="ISO-8859-1" Date: Fri, 13 Jul 2012 08:25:36 -0700 Message-ID: <1342193136.60708.16.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q6DFPavZ016136 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: freebsd@pki2.com Cc: Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jul 2012 15:25:46 -0000

On Fri, 2012-07-13 at 17:06 +0200, Fabian Keil wrote:
> Lytochkin Boris wrote:
>
> > On Thu, Jul 12, 2012 at 5:15 PM, Fabian Keil wrote:
> > > fk@r500 ~ $zpool status
> > > load: 0.15 cmd: zpool 2698 [spa_namespace_lock] 543.23r 0.00u 0.12s 0% 2908k
> >
> > This sounds similar to http://www.freebsd.org/cgi/query-pr.cgi?pr=163770
> > Try playing with kern.maxvnodes.
>
> Thanks for the suggestion, but the system is my laptop and I already
> set kern.maxvnodes=400000 which I suspect is more than I'll ever need.
>
> Currently it uses less than a tenth of this, but I'll keep an eye on
> it the next time the issue occurs.
>
> I usually reach this deadlock after losing the vdev in a single-vdev pool.
> My suspicion is that the deadlock is caused by some kind of "failure to
> communicate" between ZFS and the various geom layers involved.
>
> I already know that losing vdevs with the pool configuration I use
> can cause http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162010
> and http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162036 and I
> suspect that the deadlock is just another symptom of the same issue.
>

What is the math and constraints behind kern.maxvnodes and how would a reasonable value be chosen?
On some of my systems (default):

iirc# sysctl -a | grep kern.maxvnodes
kern.maxvnodes: 1097048

bd3# sysctl -a | grep kern.maxvnodes
kern.maxvnodes: 587825

mc# sysctl -a | grep kern.maxvnodes
kern.maxvnodes: 2112911

btw# sysctl -a | grep kern.maxvnodes
kern.maxvnodes: 460985

> Fabian

From owner-freebsd-fs@FreeBSD.ORG Fri Jul 13 16:29:52 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 49A5D106564A for ; Fri, 13 Jul 2012 16:29:52 +0000 (UTC) (envelope-from freebsd-listen@fabiankeil.de) Received: from smtprelay05.ispgateway.de (smtprelay05.ispgateway.de [80.67.31.93]) by mx1.freebsd.org (Postfix) with ESMTP id 0355A8FC0A for ; Fri, 13 Jul 2012 16:29:52 +0000 (UTC) Received: from [78.35.148.244] (helo=fabiankeil.de) by smtprelay05.ispgateway.de with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1Spik8-0000eU-FI; Fri, 13 Jul 2012 18:29:24 +0200 Date: Fri, 13 Jul 2012 18:29:21 +0200 From: Fabian Keil To: Dennis Glatting Message-ID: <20120713182921.55f16f4b@fabiankeil.de> In-Reply-To: <1342193136.60708.16.camel@btw.pki2.com> References: <1341864787.32803.43.camel@btw.pki2.com> <20120712151541.7f3a6886@fabiankeil.de> <20120713170632.065e650e@fabiankeil.de> <1342193136.60708.16.camel@btw.pki2.com> Mime-Version: 1.0 X-Df-Sender: Nzc1MDY3 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-fs@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jul 2012 16:29:52 -0000

Dennis Glatting wrote:

> On Fri, 2012-07-13 at 17:06 +0200, Fabian Keil wrote:
> > Lytochkin Boris wrote:
> >
> > > On Thu, Jul 12, 2012 at 5:15 PM, Fabian Keil wrote:
> > > > fk@r500 ~ $zpool status
> > > > load: 0.15 cmd: zpool 2698 [spa_namespace_lock] 543.23r 0.00u 0.12s 0% 2908k
> > >
> > > This sounds similar to http://www.freebsd.org/cgi/query-pr.cgi?pr=163770
> > > Try playing with kern.maxvnodes.
> >
> > Thanks for the suggestion, but the system is my laptop and I already
> > set kern.maxvnodes=400000 which I suspect is more than I'll ever need.
> >
> > Currently it uses less than a tenth of this, but I'll keep an eye on
> > it the next time the issue occurs.
> >
> > I usually reach this deadlock after losing the vdev in a single-vdev pool.
> > My suspicion is that the deadlock is caused by some kind of "failure to
> > communicate" between ZFS and the various geom layers involved.
> >
> > I already know that losing vdevs with the pool configuration I use
> > can cause http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162010
> > and http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/162036 and I
> > suspect that the deadlock is just another symptom of the same issue.

Just to be clear: I meant the spa_namespace_lock deadlock on my system,
not the one that started this thread.

> What is the math and constraints behind kern.maxvnodes and how would a
> reasonable value be chosen?

The kernel already chooses a reasonable value for you and usually
there's no reason to overwrite it.
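From memory, and worth double-checking against the source, the default is computed along the lines of

    desiredvnodes = min(maxproc + physpages / 4, <cap derived from vm_kmem_size>)

so on machines with plenty of RAM the kmem-based cap usually decides. The first term is easy to recompute by hand:

$ echo $(( $(sysctl -n kern.maxproc) + $(sysctl -n vm.stats.vm.v_page_count) / 4 ))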
You can find the kernel's math at
http://fxr.watson.org/fxr/source/kern/vfs_subr.c#L284 (ff).

> On some of my systems (default):
>
> iirc# sysctl -a | grep kern.maxvnodes
> kern.maxvnodes: 1097048

You can compare this with vfs.numvnodes and vfs.freevnodes if you like
(which of course depend on the load), but so far I don't remember
seeing any indication that your problem has anything to do with
maxvnodes (or block sizes for that matter).

Fabian

From owner-freebsd-fs@FreeBSD.ORG Fri Jul 13 21:57:29 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 3EFA4106564A for ; Fri, 13 Jul 2012 21:57:29 +0000 (UTC) (envelope-from dim@FreeBSD.org) Received: from tensor.andric.com (cl-327.ede-01.nl.sixxs.net [IPv6:2001:7b8:2ff:146::2]) by mx1.freebsd.org (Postfix) with ESMTP id EB97A8FC0A for ; Fri, 13 Jul 2012 21:57:28 +0000 (UTC) Received: from [192.168.0.6] (spaceball.home.andric.com [192.168.0.6]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by tensor.andric.com (Postfix) with ESMTPSA id 249795C37; Fri, 13 Jul 2012 23:57:28 +0200 (CEST) Message-ID: <500099C6.6020400@FreeBSD.org> Date: Fri, 13 Jul 2012 23:57:26 +0200 From: Dimitry Andric Organization: The FreeBSD Project User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0) Gecko/20120619 Thunderbird/14.0 MIME-Version: 1.0 To: Dennis Glatting References: <1341864787.32803.43.camel@btw.pki2.com> <4FFFE82B.6010109@gmail.com> <1342187238.60733.27.camel@btw.pki2.com> In-Reply-To: <1342187238.60733.27.camel@btw.pki2.com> X-Enigmail-Version: 1.5a1pre Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, Volodymyr Kostyrko Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Jul 2012 21:57:29 -0000

On 2012-07-13 15:47, Dennis Glatting wrote:
...
> I am using clang except on my CVS mirrors.
>
> I found on the mirrors that the mirror itself cannot update from itself
> but other hosts can update from the mirror. Somewhere in that
> M3/assembly muck something crashes in the process. The only way around
> the problem is to compile the /OS/ using GCC.

This is a known problem with ezm3, it aligns the stack incorrectly on
amd64. See also bin/162588. Possible workarounds are:
- Compiling libz with clang, but with SSE disabled
- Compiling the whole system with clang, but with SSE disabled
- Fixing ezm3 so it aligns the stack to 16 bytes (left as exercise for the reader ;)
- Rewriting cvsup in C (another nice exercise...)
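For the first two workarounds, the usual knob is CFLAGS in /etc/make.conf before rebuilding libz (or the whole world); a sketch, on the assumption that disabling SSE codegen avoids the 16-byte-alignment-dependent instructions:

# echo 'CFLAGS+= -mno-sse -mno-sse2' >> /etc/make.conf
# cd /usr/src/lib/libz && make clean all install

Whether -mno-sse alone is enough depends on which instructions the misaligned stack actually trips over, so treat this as a starting point rather than a verified fix.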
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 14 18:38:45 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 66BD9106566C for ; Sat, 14 Jul 2012 18:38:45 +0000 (UTC) (envelope-from launspachontwerp@vhostlin3.jkit.nl) Received: from vhostlin3.jkit.nl (vhostlin3.jkit.nl [83.96.177.125]) by mx1.freebsd.org (Postfix) with ESMTP id 286348FC1B for ; Sat, 14 Jul 2012 18:38:45 +0000 (UTC) Received: by vhostlin3.jkit.nl (Postfix, from userid 10073) id 3A5F5884582; Sat, 14 Jul 2012 20:25:27 +0200 (CEST) To: freebsd-fs@freebsd.org X-PHP-Originating-Script: 7005:mailer2012.php From: Linda Joe MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 8bit Message-Id: <20120714182843.3A5F5884582@vhostlin3.jkit.nl> Date: Sat, 14 Jul 2012 20:25:27 +0200 (CEST) Subject: About Your Pending Funds X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: lindajoe00@gmail.com List-Id: Filesystems X-List-Received-Date: Sat, 14 Jul 2012 18:38:45 -0000

I am Engr Linda Joe. A computer scientist with central bank of Nigeria. I am 30 years old, just started work with C.B.N. I came across your file which was marked X and your released disk painted RED, I took time to study it and found out that you have paid VIRTUALLY all fees and certificate but the fund has not been release to you. The most annoying thing is that they cannot tell you the truth that on no account will they ever release the fund to you. Please this is like a Mafia setting in Nigeria; you may not understand it because you are not a Nigerian. The only thing I will need to release this fund is a special HARD DISK we call it HD120 GIG. I will buy two of it, recopy your information, destroy the previous one, and punch the computer to reflect in your bank within 24 bank Trus Plus@};- :x: banking hours. I will clean up the tracer and destroy your file, after which I will run away from Nigeria to meet with you. If you are interested. Do get in touch with me immediately, You should send to me your convenient tell/fax numbers for easy communications and also re confirm your banking details, so that there won't be any mistake, Get back to me as soon as possible.

Regards,
Engr:Ms Linda Joe
From owner-freebsd-fs@FreeBSD.ORG Sat Jul 14 18:50:11 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C6BB41065688 for ; Sat, 14 Jul 2012 18:50:11 +0000 (UTC) (envelope-from freebsd@pki2.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 73BA78FC0A for ; Sat, 14 Jul 2012 18:50:11 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q6EIo4OM054349 for ; Sat, 14 Jul 2012 11:50:04 -0700 (PDT) (envelope-from freebsd@pki2.com) From: Dennis Glatting To: freebsd-fs@freebsd.org In-Reply-To: <500099C6.6020400@FreeBSD.org> References: <1341864787.32803.43.camel@btw.pki2.com> <4FFFE82B.6010109@gmail.com> <1342187238.60733.27.camel@btw.pki2.com> <500099C6.6020400@FreeBSD.org> Content-Type: text/plain; charset="ISO-8859-1" Date: Sat, 14 Jul 2012 11:50:04 -0700 Message-ID: <1342291804.30589.3.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q6EIo4OM054349 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: freebsd@pki2.com Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems X-List-Received-Date: Sat, 14 Jul 2012 18:50:11 -0000

On Fri, 2012-07-13 at 23:57 +0200, Dimitry Andric wrote:
> On 2012-07-13 15:47, Dennis Glatting wrote:
> ...
> > I am using clang except on my CVS mirrors.
> >
> > I found on the mirrors that the mirror itself cannot update from itself,
> > but other hosts can update from the mirror. Somewhere in that
> > M3/assembly muck something crashes in the process. The only way around
> > the problem is to compile the /OS/ using GCC.
>
> This is a known problem with ezm3: it aligns the stack incorrectly on
> amd64. See also bin/162588.
> Possible workarounds are:
>
> - Compiling libz with clang, but with SSE disabled
> - Compiling the whole system with clang, but with SSE disabled

I considered this, tried it, and "make buildworld" returned:

clang++ -O2 -pipe -mno-sse -mno-sse2 -mno-sse3 -mno-sse -mno-sse2 -mno-sse3
 -I/usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/include
 -I/usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/tools/clang/include
 -I/usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/lib/Support -I.
 -I/usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/../../lib/clang/include
 -DLLVM_ON_UNIX -DLLVM_ON_FREEBSD -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS
 -DLLVM_DEFAULT_TARGET_TRIPLE=\"x86_64-unknown-freebsd9.0\" -DDEFAULT_SYSROOT=\"\"
 -I/usr/obj/usr/src/tmp/legacy/usr/include -fno-exceptions
 -c /usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/lib/Support/DAGDeltaAlgorithm.cpp
 -o DAGDeltaAlgorithm.o
fatal error: error in backend: SSE register return with SSE disabled
*** [APFloat.o] Error code 1
fatal error: error in backend: SSE register return with SSE disabled
*** [APInt.o] Error code 1
fatal error: error in backend: SSE register return with SSE disabled
*** [CommandLine.o] Error code 1
3 errors
*** [bootstrap-tools] Error code 2
1 error
*** [_bootstrap-tools] Error code 2
1 error
*** [buildworld] Error code 2
1 error

Perhaps my simple approach was wrong?

CFLAGS+= -mno-sse -mno-sse2 -mno-sse3

> - Fixing ezm3 so it aligns the stack to 16 bytes (left as an exercise
>   for the reader ;)
> - Rewriting cvsup in C (another nice exercise...)
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Sat Jul 14 19:22:08 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 79489106566B for ; Sat, 14 Jul 2012 19:22:08 +0000 (UTC) (envelope-from dg@pki2.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 249848FC15 for ; Sat, 14 Jul 2012 19:22:08 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q6EJM1bV065773 for ; Sat, 14 Jul 2012 12:22:01 -0700 (PDT) (envelope-from dg@pki2.com) From: Dennis Glatting To: freebsd-fs@freebsd.org In-Reply-To: <1342291804.30589.3.camel@btw.pki2.com> References: <1341864787.32803.43.camel@btw.pki2.com> <4FFFE82B.6010109@gmail.com> <1342187238.60733.27.camel@btw.pki2.com> <500099C6.6020400@FreeBSD.org> <1342291804.30589.3.camel@btw.pki2.com> Content-Type: text/plain; charset="ISO-8859-1" Date: Sat, 14 Jul 2012 12:22:01 -0700 Message-ID: <1342293721.30589.5.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q6EJM1bV065773 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: dg@pki2.com Subject: Re: ZFS hanging X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems X-List-Received-Date: Sat, 14 Jul 2012 19:22:08 -0000

On Sat, 2012-07-14 at 11:50 -0700, Dennis Glatting wrote:
> On Fri, 2012-07-13 at 23:57 +0200, Dimitry Andric wrote:
> > On 2012-07-13 15:47, Dennis Glatting
wrote:
> > ...
> > > I am using clang except on my CVS mirrors.
> > >
> > > I found on the mirrors that the mirror itself cannot update from itself,
> > > but other hosts can update from the mirror. Somewhere in that
> > > M3/assembly muck something crashes in the process. The only way around
> > > the problem is to compile the /OS/ using GCC.
> >
> > This is a known problem with ezm3: it aligns the stack incorrectly on
> > amd64. See also bin/162588. Possible workarounds are:
> >
> > - Compiling libz with clang, but with SSE disabled
> > - Compiling the whole system with clang, but with SSE disabled
>
> I considered this, tried it, and "make buildworld" returned:
>
> clang++ -O2 -pipe -mno-sse -mno-sse2 -mno-sse3 -mno-sse -mno-sse2 -mno-sse3
>  -I/usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/include
>  -I/usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/tools/clang/include
>  -I/usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/lib/Support -I.
>  -I/usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/../../lib/clang/include
>  -DLLVM_ON_UNIX -DLLVM_ON_FREEBSD -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS
>  -DLLVM_DEFAULT_TARGET_TRIPLE=\"x86_64-unknown-freebsd9.0\" -DDEFAULT_SYSROOT=\"\"
>  -I/usr/obj/usr/src/tmp/legacy/usr/include -fno-exceptions
>  -c /usr/src/lib/clang/libllvmsupport/../../../contrib/llvm/lib/Support/DAGDeltaAlgorithm.cpp
>  -o DAGDeltaAlgorithm.o
> fatal error: error in backend: SSE register return with SSE disabled
> *** [APFloat.o] Error code 1
> fatal error: error in backend: SSE register return with SSE disabled
> *** [APInt.o] Error code 1
> fatal error: error in backend: SSE register return with SSE disabled
> *** [CommandLine.o] Error code 1
> 3 errors
> *** [bootstrap-tools] Error code 2
> 1 error
> *** [_bootstrap-tools] Error code 2
> 1 error
> *** [buildworld] Error code 2
> 1 error
>
> Perhaps my simple approach was wrong?
>
> CFLAGS+= -mno-sse -mno-sse2 -mno-sse3

Sorry. Missed NO_CPU_CFLAGS.

> > - Fixing ezm3 so it aligns the stack to 16 bytes (left as an exercise
> >   for the reader ;)
> > - Rewriting cvsup in C (another nice exercise...)
> > _______________________________________________
> > freebsd-fs@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

--
Dennis Glatting
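Putting Dimitry's suggestion and Dennis's NO_CPU_CFLAGS remark together,
a whole-system attempt would amount to something like the following
/etc/make.conf fragment. This is an assumption assembled from the thread,
not a tested recipe; NO_CPU_CFLAGS keeps CPUTYPE-derived flags from being
appended on top of the -mno-sse* options, and Dennis's build log above
shows LLVM itself failing to build with SSE fully disabled, so the
per-library route may be the only practical one:

    # /etc/make.conf -- sketch only, pieced together from the thread above
    CC=clang
    CXX=clang++
    CPP=clang-cpp
    CFLAGS+=	-mno-sse -mno-sse2 -mno-sse3
    NO_CPU_CFLAGS=	yes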