From owner-freebsd-fs@FreeBSD.ORG Mon Jan 16 01:09:15 2012
From: linimon@FreeBSD.org
Date: Mon, 16 Jan 2012 01:09:15 GMT
Subject: Re: kern/164184: [ufs] [panic] Kernel panic with ufs_makeinode

Old Synopsis: Kernel panic with ufs_makeinode
New Synopsis: [ufs] [panic] Kernel panic with ufs_makeinode

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Mon Jan 16 01:08:55 UTC 2012
Responsible-Changed-Why: Over to maintainer(s).

http://www.freebsd.org/cgi/query-pr.cgi?pr=164184

From owner-freebsd-fs@FreeBSD.ORG Mon Jan 16 01:28:43 2012
From: linimon@FreeBSD.org
Date: Mon, 16 Jan 2012 01:28:43 GMT
Subject: Re: kern/163801: [md] [request] allow mfsBSD legacy installed in 'swap' partition.

Old Synopsis: allow mfsBSD legacy installed in 'swap' partition.
New Synopsis: [md] [request] allow mfsBSD legacy installed in 'swap' partition.

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Mon Jan 16 01:27:11 UTC 2012
Responsible-Changed-Why: Reclassify.
http://www.freebsd.org/cgi/query-pr.cgi?pr=163801

From owner-freebsd-fs@FreeBSD.ORG Mon Jan 16 01:31:20 2012
From: linimon@FreeBSD.org
Date: Mon, 16 Jan 2012 01:31:19 GMT
Subject: Re: kern/163501: [nfs] NFS exporting a dir and a subdir in that dir to the same host; mountd error message needs improvement

Old Synopsis: NFS exporting a dir and a subdir in that dir to the same host; mountd error message needs improvement
New Synopsis: [nfs] NFS exporting a dir and a subdir in that dir to the same host; mountd error message needs improvement

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Mon Jan 16 01:31:02 UTC 2012
Responsible-Changed-Why: Over to maintainer(s).

http://www.freebsd.org/cgi/query-pr.cgi?pr=163501

From owner-freebsd-fs@FreeBSD.ORG Mon Jan 16 11:07:01 2012
From: FreeBSD bugmaster
To: freebsd-fs@FreeBSD.org
Date: Mon, 16 Jan 2012 11:07:00 GMT
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).
The following is a listing of current problems submitted by FreeBSD
users. These represent problem reports covering all versions, including
experimental development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/164184  fs  [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801  fs  [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770  fs  [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501  fs  [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944  fs  [coda] Coda file system module looks broken in 9.0
o kern/162860  fs  [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751  fs  [zfs] [panic] kernel panics during file operations
o kern/162591  fs  [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519  fs  [zfs] "zpool import" relies on buggy realpath() behavi
o kern/162362  fs  [snapshots] [panic] ufs with snapshot(s) panics when g
o kern/162083  fs  [zfs] [panic] zfs unmount -f pool
o kern/161968  fs  [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161897  fs  [zfs] [patch] zfs partition probing causing long delay
o kern/161864  fs  [ufs] removing journaling from UFS partition fails on
o bin/161807   fs  [patch] add option for explicitly specifying metadata
o kern/161674  fs  [ufs] snapshot on journaled ufs doesn't work
o kern/161579  fs  [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533  fs  [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161511  fs  [unionfs] Filesystem deadlocks when using multiple uni
o kern/161438  fs  [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424  fs  [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280  fs  [zfs] Stack overflow in gptzfsboot
o kern/161205  fs  [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169  fs  [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112  fs  [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893  fs  [zfs] [panic] 9.0-BETA2 kernel panic
o kern/160860  fs  [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801  fs  [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790  fs  [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777  fs  [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706  fs  [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591  fs  [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410  fs  [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283  fs  [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159971  fs  [ffs] [panic] panic with soft updates journaling durin
o kern/159930  fs  [ufs] [panic] kernel core
o kern/159402  fs  [zfs][loader] symlinks cause I/O errors
o kern/159357  fs  [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356  fs  [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351  fs  [nfs] [patch] - divide by zero in mountnfs()
o kern/159251  fs  [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077  fs  [zfs] Can't cd .. with latest zfs version
o kern/159048  fs  [smbfs] smb mount corrupts large files
o kern/159045  fs  [zfs] [hang] ZFS scrub freezes system
o kern/158839  fs  [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802  fs  amd(8) ICMP storm and unkillable process.
o kern/158711  fs  [ffs] [panic] panic in ffs_blkfree and ffs_valloc
o kern/158231  fs  [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929  fs  [nfs] NFS slow read
o kern/157722  fs  [geli] unable to newfs a geli encrypted partition
o kern/157399  fs  [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179  fs  [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156797  fs  [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781  fs  [zfs] zfs is losing the snapshot directory,
p kern/156545  fs  [ufs] mv could break UFS on SMP systems
o kern/156193  fs  [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039  fs  [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615  fs  [zfs] zfs v28 broken on sparc64 -current
o kern/155587  fs  [zfs] [panic] kernel panic with zfs
f kern/155411  fs  [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199  fs  [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104   fs  [zfs][patch] use /dev prefix by default when importing
o kern/154930  fs  [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828  fs  [msdosfs] Unable to create directories on external USB
o kern/154491  fs  [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228  fs  [md] md getting stuck in wdrain state
o kern/153996  fs  [zfs] zfs root mount error while kernel is not located
o kern/153753  fs  [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716  fs  [zfs] zpool scrub time remaining is incorrect
o kern/153695  fs  [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680  fs  [xfs] 8.1 failing to mount XFS partitions
o kern/153520  fs  [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable
o kern/153418  fs  [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351  fs  [zfs] locking directories/files in ZFS
o bin/153258   fs  [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173  fs  [zfs] booting from a gzip-compressed dataset doesn't w
o kern/153126  fs  [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022  fs  [nfs] nfs service hangs with linux client [regression]
o kern/151942  fs  [zfs] panic during ls(1) zfs snapshot directory
o kern/151905  fs  [zfs] page fault under load in /sbin/zfs
o bin/151713   fs  [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648  fs  [zfs] disk wait bug
o kern/151629  fs  [fs] [patch] Skip empty directory entries during name
o kern/151330  fs  [zfs] will unshare all zfs filesystem after execute a
o kern/151326  fs  [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251  fs  [ufs] Can not create files on filesystem with heavy us
o kern/151226  fs  [zfs] can't delete zfs snapshot
o kern/151111  fs  [zfs] vnodes leakage during zfs unmount
o kern/150503  fs  [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501  fs  [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390  fs  [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336  fs  [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208  fs  mksnap_ffs(8) hang/deadlock
o kern/149173  fs  [patch] [zfs] make OpenSolaris installa
o kern/149015  fs  [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014  fs  [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013  fs  [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504  fs  [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490  fs  [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368  fs  [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138  fs  [zfs] zfs raidz pool commands freeze
o kern/147903  fs  [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881  fs  [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147560  fs  [zfs] [boot] Booting 8.1-PRERELEASE raidz system take
o kern/147420  fs  [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941  fs  [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786  fs  [zfs] zpool import hangs with checksum errors
o kern/146708  fs  [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528  fs  [zfs] Severe memory leak in ZFS on i386
o kern/146502  fs  [nfs] FreeBSD 8 NFS Client Connection to Server
s kern/145712  fs  [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411  fs  [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309   fs  bsdlabel: Editing disk label invalidates the whole dev
o kern/145272  fs  [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246  fs  [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238  fs  [zfs] [panic] kernel panic on zpool clear tank
o kern/145229  fs  [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189  fs  [nfs] nfsd performs abysmally under load
o kern/144929  fs  [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447  fs  [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416  fs  [panic] Kernel panic on online filesystem optimization
s kern/144415  fs  [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234  fs  [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825  fs  [nfs] [panic] Kernel panic on NFS client
o bin/143572   fs  [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212  fs  [nfs] NFSv4 client strange work ...
o kern/143184  fs  [zfs] [lor] zfs/bufwait LOR
o kern/142878  fs  [zfs] [vfs] lock order reversal
o kern/142597  fs  [ext2fs] ext2fs does not work on filesystems with real
o kern/142489  fs  [zfs] [lor] allproc/zfs LOR
o kern/142466  fs  Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306  fs  [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068  fs  [ufs] BSD labels are got deleted spontaneously
o kern/141897  fs  [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463  fs  [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141305  fs  [zfs] FreeBSD ZFS+sendfile severe performance issues (
o kern/141091  fs  [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086  fs  [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010  fs  [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888  fs  [zfs] boot fail from zfs root while the pool resilveri
o kern/140661  fs  [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640  fs  [zfs] snapshot crash
o kern/140068  fs  [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725  fs  [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715  fs  [zfs] vfs.numvnodes leak on busy zfs
p bin/139651   fs  [nfs] mount(8): read-only remount of NFS volume does n
o kern/139597  fs  [patch] [tmpfs] tmpfs initializes va_gen but doesn't u
o kern/139564  fs  [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo
o kern/139407  fs  [smbfs] [panic] smb mount causes system crash if remot
o kern/138662  fs  [panic] ffs_blkfree: freeing free block
o kern/138421  fs  [ufs] [patch] remove UFS label limitations
o kern/138202  fs  mount_msdosfs(1) see only 2Gb
o kern/136968  fs  [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945  fs  [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944  fs  [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873  fs  [ntfs] Missing directories/files on NTFS volume
o kern/136865  fs  [nfs] [patch] NFS exports atomic and on-the-fly atomic
p kern/136470  fs  [nfs] Cannot mount / in read-only, over NFS
o kern/135546  fs  [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469  fs  [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050  fs  [zfs] ZFS clears/hides disk errors on reboot
o kern/134491  fs  [zfs] Hot spares are rather cold...
o kern/133676  fs  [smbfs] [panic] umount -f'ing a vnode-based memory dis
o kern/132960  fs  [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397  fs  reboot causes filesystem corruption (failure to sync b
o kern/132331  fs  [ufs] [lor] LOR ufs and syncer
o kern/132237  fs  [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145  fs  [panic] File System Hard Crashes
o kern/131441  fs  [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360  fs  [nfs] poor scaling behavior of the NFS server under lo
o kern/131342  fs  [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341   fs  makefs: error "Bad file descriptor" on the mount poin
o kern/130920  fs  [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210  fs  [nullfs] Error by check nullfs
o kern/129760  fs  [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488  fs  [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231  fs  [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152  fs  [panic] non-userfriendly panic when trying to mount(8)
o kern/127787  fs  [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270   fs  fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029  fs  [panic] mount(8): trying to mount a write protected zi
o kern/126287  fs  [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895  fs  [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738  fs  [zfs] [request] SHA256 acceleration in ZFS
o kern/123939  fs  [msdosfs] corrupts new files
f sparc/123566 fs  [zfs] zpool import issue: EOVERFLOW
o kern/122380  fs  [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172   fs  [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898   fs  [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121072   fs  [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483  fs  [ntfs] [patch] NTFS filesystem locking changes
o kern/120482  fs  [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912  fs  [2tb] disk sizing/geometry problem with large array
o kern/118713  fs  [minidump] [patch] Display media size required for a k
o bin/118249   fs  [ufs] mv(1): moving a directory changes its mtime
o kern/118126  fs  [nfs] [patch] Poor NFS server write performance
o kern/118107  fs  [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954  fs  [ufs] dirhash on very large directories blocks the mac
o bin/117315   fs  [smbfs] mount_smbfs(8) and related options can't mount
f kern/117314  fs  [ntfs] Long-filename only NTFS fs'es cause kernel pani
o kern/117158  fs  [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980   fs  [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931  fs  lack of fsck_cd9660 prevents mounting iso images with
o kern/116583  fs  [ffs] [hang] System freezes for short time when using
o bin/115361   fs  [zfs] mount(8) gets into a state where it won't set/un
o kern/114955  fs  [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847  fs  [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676  fs  [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468   fs  [patch] [request] add -d option to umount(8) to detach
o kern/113852  fs  [smbfs] smbfs does not properly implement DFS referral
o bin/113838   fs  [patch] [request] mount(8): add support for relative p
o bin/113049   fs  [patch] [request] make quot(8) use getopt(3) and show
o kern/112658  fs  [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843  fs  [msdosfs] Long Names of files are incorrectly created
o kern/111782  fs  [ufs] dump(8) fails horribly for large filesystems
s bin/111146   fs  [2tb] fsck(8) fails on 6T filesystem
o kern/109024  fs  [msdosfs] [iconv] mount_msdosfs: msdosfs_iconv: Operat
o kern/109010  fs  [msdosfs] can't mv directory within fat32 file system
o bin/107829   fs  [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107  fs  [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406  fs  [ufs] Processes get stuck in "ufs" state under persist
o kern/104133  fs  [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035  fs  [ntfs] Directories in NTFS mounted disc images appear
o kern/101324  fs  [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290   fs  [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498    fs  [request] newfs(8) has no option to clear the first 12
o kern/97377   fs  [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222   fs  [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849   fs  [ufs] rename on UFS filesystem is not atomic
o bin/94810    fs  fsck(8) incorrectly reports 'file system marked clean'
o kern/94769   fs  [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733   fs  [smbfs] smbfs may cause double unlock
o kern/93942   fs  [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272   fs  [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134   fs  [smbfs] [patch] Preserve access and modification time
a kern/90815   fs  [smbfs] [patch] SMBFS with character conversions somet
o kern/88657   fs  [smbfs] windows client hang when browsing a samba shar
o kern/88555   fs  [panic] ffs_blkfree: freeing free frag on AMD 64
o kern/88266   fs  [smbfs] smbfs does not implement UIO_NOCOPY and sendfi
o bin/87966    fs  [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859   fs  [smbfs] System reboot while umount smbfs.
o kern/86587   fs  [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494    fs  fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088   fs  [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779    fs  Background-fsck checks one filesystem twice and omits
o kern/73484   fs  [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019    fs  [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774   fs  [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600    fs  fsck(8) throws files away when it can't grow lost+foun
o kern/68978   fs  [panic] [ufs] crashes with failing hard disk, loose po
o kern/65920   fs  [nwfs] Mounted Netware filesystem behaves strange
o kern/65901   fs  [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503   fs  [smbfs] mount_smbfs does not work as non-root
o kern/55617   fs  [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685   fs  [hang] Unbounded inode allocation causes kernel to loc
o kern/51583   fs  [nullfs] [patch] allow to work with devices and socket
o kern/36566   fs  [smbfs] System reboot with dead smb mount and umount
o bin/27687    fs  fsck(8) wrapper is not properly passing options to fsc
o kern/18874   fs  [2TB] 32bit NFS servers export wrong negative values t

259 problems total.
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 17 10:20:12 2012
From: Remy de Ruysscher
Date: Tue, 17 Jan 2012 10:43:35 +0100
Subject: Re: kern/164184: [ufs] [panic] Kernel panic with ufs_makeinode

The following reply was made to PR kern/164184; it has been noted by GNATS.

The kernel panics have stopped, but the filesystem is still error-prone;
a read-only fsck(8) pass still reports damage:

** /dev/da0s1d (NO WRITE)
** Last Mounted on /usr
** Phase 1 - Check Blocks and Sizes
INCORRECT BLOCK COUNT I=2075841 (4 should be 0)
CORRECT? no

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 17 22:14:04 2012
From: linimon@FreeBSD.org
Date: Tue, 17 Jan 2012 22:14:04 GMT
Subject: Re: kern/159663: [socket] [nullfs] sockets don't work through nullfs mounts

Synopsis: [socket] [nullfs] sockets don't work through nullfs mounts

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Tue Jan 17 22:13:29 UTC 2012
Responsible-Changed-Why: Over to maintainer(s).
http://www.freebsd.org/cgi/query-pr.cgi?pr=159663

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 17 22:14:19 2012
From: linimon@FreeBSD.org
To: kib@freebsd.org, attilio@freebsd.org, rmacklem@freebsd.org
Date: Tue, 17 Jan 2012 22:14:19 GMT
Subject: Re: kern/164261: [nullfs] [patch] fix panic with NFS served from NULLFS

Old Synopsis: [patch] fix panic with NFS served from NULLFS
New Synopsis: [nullfs] [patch] fix panic with NFS served from NULLFS

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Tue Jan 17 22:13:29 UTC 2012
Responsible-Changed-Why: Over to maintainer(s).
http://www.freebsd.org/cgi/query-pr.cgi?pr=164261

From owner-freebsd-fs@FreeBSD.ORG Wed Jan 18 01:06:04 2012
From: linimon@FreeBSD.org
Date: Wed, 18 Jan 2012 01:06:04 GMT
Subject: Re: kern/164256: [zfs] device entry for volume is not created after zfs receive

Old Synopsis: zfs: device entry for volume is not created after zfs receive
New Synopsis: [zfs] device entry for volume is not created after zfs receive

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Wed Jan 18 01:05:48 UTC 2012
Responsible-Changed-Why: Over to maintainer(s).

http://www.freebsd.org/cgi/query-pr.cgi?pr=164256

From owner-freebsd-fs@FreeBSD.ORG Wed Jan 18 04:51:03 2012
From: Kostik Belousov <kostikbel@gmail.com>
To: Eygene Ryabinkin
Cc: fs@freebsd.org
Date: Wed, 18 Jan 2012 06:14:39 +0200
Subject: Re: kern/164261: [patch] fix panic with NFS served from NULLFS
On Wed, Jan 18, 2012 at 12:28:53AM +0400, Eygene Ryabinkin wrote:
>
> >Number:         164261
> >Category:       kern
> >Synopsis:       [patch] fix panic with NFS served from NULLFS
> >Confidential:   no
> >Severity:       serious
> >Priority:       medium
> >Responsible:    freebsd-bugs
> >State:          open
> >Class:          sw-bug
> >Submitter-Id:   current-users
> >Arrival-Date:   Tue Jan 17 20:40:14 UTC 2012
> >Originator:     Eygene Ryabinkin
> >Release:        FreeBSD 10.0-CURRENT amd64
> >Organization:   Code Labs
> >Environment:
>
> System: FreeBSD 10.0-CURRENT, FreeBSD 9.0-STABLE
>
> >Description:
>
> When one exports NULLFS filesystems via NFS, one can face kernel
> panics if external clients use the readdir+ feature and access the
> same directories simultaneously.
>
> An example of the backtrace can be obtained at
> http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/panic-backtrace.txt
> This backtrace is from 9.x as of December 2011.
>
> The real problem is that the thread that loses the race in
> null_nodeget (/sys/fs/nullfs/null_subr.c) will put the native lock
> (vp->v_vnlock = &vp->v_lock) into the nullfs vnode that should be
> destroyed (because the thread lost the race). And null_reclaim
> (/sys/fs/nullfs/null_vnops.c) will try to lock the vnode's v_lock in
> exclusive mode. This leads to a panic, because v_vnlock is already
> locked at the time of VOP_RECLAIM processing and v_vnlock points to
> v_lock. Bingo!
>
> >How-To-Repeat:
>
> See http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/README.txt
> section "How to reproduce".
>
> >Fix:
>
> Patches
> http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/0001-NULLFS-properly-destroy-node-hash.patch

This one is probably fine, assuming that the hashes are properly cleared
on unmount. Feel free to commit.

> and
> http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/0002-NULLFS-fix-panics-when-lowervp-is-locked-with-LK_SHA.patch
> will fix the problem (in reality, the first patch is just some
> nitpicking).

And I did not even read this.

The issue that the backtrace is pointing to seems to be the misuse of
vrele() after the vnode lock is switched to the null vnode's v_lock.
Since the vnode that is being thrown out is exclusively locked, the
cleanup path shall do vput() instead of vrele().

Despite the above, the vrele(lowervp) call is fine, despite lowervp also
being locked exclusively, because the usecount for the vnode must be > 1,
since null_hashins() successfully found the ovp in the hash.

Try the change below.
diff --git a/sys/fs/nullfs/null_subr.c b/sys/fs/nullfs/null_subr.c
index 319e404..6ba7508 100644
--- a/sys/fs/nullfs/null_subr.c
+++ b/sys/fs/nullfs/null_subr.c
@@ -252,7 +252,7 @@ null_nodeget(mp, lowervp, vpp)
 		vrele(lowervp);
 		vp->v_vnlock = &vp->v_lock;
 		xp->null_lowervp = NULL;
-		vrele(vp);
+		vput(vp);
 		return (0);
 	}
 	*vpp = vp;
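For reference, the contract between the two release primitives relied
on above, per vrele(9) and vput(9) (the comment is illustrative, not
part of the patch):

	/*
	 * Both calls drop one use reference on the vnode; they differ
	 * only in the lock state they expect from the caller:
	 *
	 *   vrele(vp) -- vp must be unlocked;
	 *   vput(vp)  -- vp must be locked, and the lock is released
	 *                as part of the call.
	 *
	 * The race loser in null_nodeget() still holds the vnode lock
	 * at this point, hence the vput() suggested above.
	 */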
From owner-freebsd-fs@FreeBSD.ORG Wed Jan 18 10:24:35 2012
From: Eygene Ryabinkin <rea@codelabs.ru>
To: Kostik Belousov
Cc: fs@freebsd.org
Date: Wed, 18 Jan 2012 14:07:27 +0400
Subject: Re: kern/164261: [patch] fix panic with NFS served from NULLFS

Konstantin, good day.

Wed, Jan 18, 2012 at 06:14:39AM +0200, Kostik Belousov wrote:
> On Wed, Jan 18, 2012 at 12:28:53AM +0400, Eygene Ryabinkin wrote:
> > Patches
> > http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/0001-NULLFS-properly-destroy-node-hash.patch
> This one is probably fine, assuming that the hashes are properly cleared
> on unmount. Feel free to commit.

Will do, thanks!

> > and
> > http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/0002-NULLFS-fix-panics-when-lowervp-is-locked-with-LK_SHA.patch
> > will fix the problem (in reality, the first patch is just some
> > nitpicking).
> And I did not even read this.
>
> The issue that the backtrace is pointing to seems to be the misuse of
> vrele() after the vnode lock is switched to the null vnode's v_lock.
> Since the vnode that is being thrown out is exclusively locked, the
> cleanup path shall do vput() instead of vrele().

The short story: at this vrele(), vp had returned to its own v_lock as
v_vnlock, so it is unlocked here.

The long story, with some thoughts and questions. If the vput() path
ends up in null_reclaim(), this seems to be unhelpful:

- VOP_RECLAIM() expects an exclusively locked vnode, since it was
  instructed to by vnode_if.src, and thus vnode_if.c has
  ASSERT_VOP_ELOCKED(a->a_vp) in VOP_RECLAIM_APV(); for nullfs,
  vop_islocked is vop_stdislocked() and it checks the lock status of
  v_vnlock, so anything that comes to null_reclaim will be exclusively
  locked via *v_vnlock;

- null_reclaim() has the following code,
  {{{
  struct vnode *vp = ap->a_vp;
  [...]
  lockmgr(&vp->v_lock, LK_EXCLUSIVE, NULL)
  [...]
  }}}
  And when vp->v_lock is equal to *v_vnlock, this will lead to the
  lockmgr panic, because the thread tries to exclusively lock an
  object that it has already locked itself and that has no recursion
  rights.

If anyone sees flaws in this explanation, please point me to them.

I recompiled the kernel with your vrele -> vput change inside
null_nodeget() and with DEBUG_VFS_LOCKS: it resulted in the lock
violation from insmntque,
http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/DEBUG_VFS_LOCKS-panic.txt
So it had not got to the point where null_reclaim() comes into the game.

The problem is that insmntque1() wants the passed vnode to be
exclusively locked, but nfsrvd_readdirplus() uses LK_SHARED.

By the way, the ASSERT_VOP_ELOCKED() was introduced in r182364 by you:
why do you insist on an exclusive lock for an MP-safe fs? The current
interplay of NFS and NULLFS makes me think that either one of the
filesystems isn't really MP-safe, or the requirement for exclusive
locking can be relaxed.

But to move on, I removed ASSERT_VOP_ELOCKED() from insmntque1 and did
the test once again. Another panic, now from vputx(),
http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/DEBUG_VFS_LOCKS-no-MPsafe-check-panic.txt
and it hits a really good assertion: we had switched to the native lock
(vp->v_lock) and it is not locked at all.

So I think that vrele() is the appropriate call for the race loser's
return path inside null_nodeget().

> Despite the above, the vrele(lowervp) call is fine, despite lowervp also
> being locked exclusively, because the usecount for the vnode must be > 1,
> since null_hashins() successfully found the ovp in the hash.
>
> Try the change below.
>
> diff --git a/sys/fs/nullfs/null_subr.c b/sys/fs/nullfs/null_subr.c
> index 319e404..6ba7508 100644
> --- a/sys/fs/nullfs/null_subr.c
> +++ b/sys/fs/nullfs/null_subr.c
> @@ -252,7 +252,7 @@ null_nodeget(mp, lowervp, vpp)
>  		vrele(lowervp);
>  		vp->v_vnlock = &vp->v_lock;
>  		xp->null_lowervp = NULL;
> -		vrele(vp);
> +		vput(vp);
>  		return (0);
>  	}
>  	*vpp = vp;

It will also need to remove
{{{
panic("null_reclaim: reclaiming a node with no lowervp");
}}}
from null_reclaim(), since this function is not ready for null vnodes
released by race losers. And my intention in adding the code (the
second patch) that does not let such half-baked vnodes pass into
existence was precisely to keep null_reclaim simple and to put all the
internal complexity of creating new null vnodes into null_nodeget()
and null_subr.c.
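As an aside, the lockmgr panic described above is the classic
self-relock of a non-recursive lock. A minimal userland analogy
(purely illustrative, not kernel code):

	#include <pthread.h>

	int
	main(void)
	{
		pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

		pthread_mutex_lock(&m);
		/*
		 * Second acquisition by the owning thread: a default
		 * (non-recursive) mutex deadlocks here, much as
		 * lockmgr() panics when null_reclaim() takes
		 * &vp->v_lock while it is already held via
		 * v_vnlock == &vp->v_lock.
		 */
		pthread_mutex_lock(&m);
		return (0);
	}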
--
Eygene Ryabinkin  ,,,^..^,,,  [ Life's unfair - but root password helps! | codelabs.ru ]
                              [ 82FE 06BC D497 C0DE 49EC 4FF0 16AF 9EAE 8152 ECFB | freebsd.org ]

From owner-freebsd-fs@FreeBSD.ORG Wed Jan 18 11:00:36 2012
From: Eygene Ryabinkin <rea@codelabs.ru>
Date: Wed, 18 Jan 2012 14:36:04 +0400
Subject: Re: kern/164261: [patch] fix panic with NFS served from NULLFS

The following reply was made to PR kern/164261; it has been noted by GNATS.

For the record, there is a discussion about this PR in freebsd-fs@:
http://lists.freebsd.org/pipermail/freebsd-fs/2012-January/013438.html

--
Eygene Ryabinkin  ,,,^..^,,,  [ Life's unfair - but root password helps! | codelabs.ru ]
                              [ 82FE 06BC D497 C0DE 49EC 4FF0 16AF 9EAE 8152 ECFB | freebsd.org ]

From owner-freebsd-fs@FreeBSD.ORG Wed Jan 18 18:58:06 2012
From: Kostik Belousov <kostikbel@gmail.com>
To: Eygene Ryabinkin
Cc: fs@freebsd.org
Date: Wed, 18 Jan 2012 20:57:58 +0200
Subject: Re: kern/164261: [patch] fix panic with NFS served from NULLFS

On Wed, Jan 18, 2012 at 02:07:27PM +0400, Eygene Ryabinkin wrote:
> Konstantin, good day.
>
> Wed, Jan 18, 2012 at 06:14:39AM +0200, Kostik Belousov wrote:
> > On Wed, Jan 18, 2012 at 12:28:53AM +0400, Eygene Ryabinkin wrote:
> > > Patches
> > > http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/0001-NULLFS-properly-destroy-node-hash.patch
> > This one is probably fine, assuming that the hashes are properly cleared
> > on unmount. Feel free to commit.
>
> Will do, thanks!
>
> > > and
> > > http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/0002-NULLFS-fix-panics-when-lowervp-is-locked-with-LK_SHA.patch
> > > will fix the problem (in reality, the first patch is just some
> > > nitpicking).
> > And I did not even read this.
> >
> > The issue that the backtrace is pointing to seems to be the misuse of
> > vrele() after the vnode lock is switched to the null vnode's v_lock.
> > Since the vnode that is being thrown out is exclusively locked, the
> > cleanup path shall do vput() instead of vrele().
>
> The short story: at this vrele(), vp had returned to its own v_lock as
> v_vnlock, so it is unlocked here.
>
> The long story, with some thoughts and questions. If the vput() path
> ends up in null_reclaim(), this seems to be unhelpful:
>
> - VOP_RECLAIM() expects an exclusively locked vnode, since it was
>   instructed to by vnode_if.src, and thus vnode_if.c has
>   ASSERT_VOP_ELOCKED(a->a_vp) in VOP_RECLAIM_APV(); for nullfs,
>   vop_islocked is vop_stdislocked() and it checks the lock status of
>   v_vnlock, so anything that comes to null_reclaim will be exclusively
>   locked via *v_vnlock;
>
> - null_reclaim() has the following code,
>   {{{
>   struct vnode *vp = ap->a_vp;
>   [...]
>   lockmgr(&vp->v_lock, LK_EXCLUSIVE, NULL)
>   [...]
>   }}}
>   And when vp->v_lock is equal to *v_vnlock, this will lead to the
>   lockmgr panic, because the thread tries to exclusively lock an
>   object that it has already locked itself and that has no recursion
>   rights.
>
> If anyone sees flaws in this explanation, please point me to them.

Ok, the real flaw there is the attempt to treat a half-constructed
vnode as a fully-constructed one later. It shall be decommissioned with
the same code that insmntque1() uses. The complication is the fact that
the vnode can be found on the mount list, but we only search for
vnodes by hash.

> I recompiled the kernel with your vrele -> vput change inside
> null_nodeget() and with DEBUG_VFS_LOCKS: it resulted in the lock
> violation from insmntque,
> http://codelabs.ru/fbsd/prs/2012-jan-nullfs-LK_SHARED/DEBUG_VFS_LOCKS-panic.txt
> So it had not got to the point where null_reclaim() comes into the game.
>
> The problem is that insmntque1() wants the passed vnode to be
> exclusively locked, but nfsrvd_readdirplus() uses LK_SHARED.
>
> By the way, the ASSERT_VOP_ELOCKED() was introduced in r182364 by you:
> why do you insist on an exclusive lock for an MP-safe fs? The current
> interplay of NFS and NULLFS makes me think that either one of the
> filesystems isn't really MP-safe, or the requirement for exclusive
> locking can be relaxed.

insmntque1() requires the exclusively locked vnode because the function
modifies the vnode (it inserts the vnode into the mount list). nfsd is
right there, nullfs is not. The filesystem shall ensure the proper
locking if the requested mode is not strong enough. See how UFS treats
the lock flags in ffs_vgetf(): shared is only honored if the vnode is
found in the hash. So this is another bug: nullfs must switch to an
exclusive lock there.
diff --git a/sys/fs/nullfs/null_subr.c b/sys/fs/nullfs/null_subr.c
index 319e404..dd4ab61 100644
--- a/sys/fs/nullfs/null_subr.c
+++ b/sys/fs/nullfs/null_subr.c
@@ -169,17 +169,26 @@ null_hashins(mp, xp)
 }
 
 static void
-null_insmntque_dtr(struct vnode *vp, void *xp)
+null_destroy_proto(struct vnode *vp, void *xp)
 {
 
-	vput(((struct null_node *)xp)->null_lowervp);
+	VI_LOCK(vp);
 	vp->v_data = NULL;
 	vp->v_vnlock = &vp->v_lock;
-	free(xp, M_NULLFSNODE);
 	vp->v_op = &dead_vnodeops;
+	VI_UNLOCK(vp);
 	(void) vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
 	vgone(vp);
 	vput(vp);
+	free(xp, M_NULLFSNODE);
+}
+
+static void
+null_insmntque_dtr(struct vnode *vp, void *xp)
+{
+
+	vput(((struct null_node *)xp)->null_lowervp);
+	null_destroy_proto(vp, xp);
 }
 
 /*
@@ -250,9 +259,7 @@ null_nodeget(mp, lowervp, vpp)
 	*vpp = null_hashins(mp, xp);
 	if (*vpp != NULL) {
 		vrele(lowervp);
-		vp->v_vnlock = &vp->v_lock;
-		xp->null_lowervp = NULL;
-		vrele(vp);
+		null_destroy_proto(vp, xp);
 		return (0);
 	}
 	*vpp = vp;
diff --git a/sys/fs/nullfs/null_vfsops.c b/sys/fs/nullfs/null_vfsops.c
index cf3176f..d39926f 100644
--- a/sys/fs/nullfs/null_vfsops.c
+++ b/sys/fs/nullfs/null_vfsops.c
@@ -307,6 +307,12 @@ nullfs_vget(mp, ino, flags, vpp)
 	struct vnode **vpp;
 {
 	int error;
+
+	KASSERT((flags & LK_TYPE_MASK) != 0,
+	    ("nullfs_vget: no lock requested"));
+	flags &= ~LK_TYPE_MASK;
+	flags |= LK_EXCLUSIVE;
+
 	error = VFS_VGET(MOUNTTONULLMOUNT(mp)->nullm_vfs, ino, flags, vpp);
 	if (error)
 		return (error);
diff --git a/sys/fs/nullfs/null_vnops.c b/sys/fs/nullfs/null_vnops.c
index e0645bd..b607666 100644
--- a/sys/fs/nullfs/null_vnops.c
+++ b/sys/fs/nullfs/null_vnops.c
@@ -697,12 +697,18 @@ null_inactive(struct vop_inactive_args *ap)
 static int
 null_reclaim(struct vop_reclaim_args *ap)
 {
-	struct vnode *vp = ap->a_vp;
-	struct null_node *xp = VTONULL(vp);
-	struct vnode *lowervp = xp->null_lowervp;
+	struct vnode *vp;
+	struct null_node *xp;
+	struct vnode *lowervp;
+
+	vp = ap->a_vp;
+	xp = VTONULL(vp);
+	lowervp = xp->null_lowervp;
+
+	KASSERT(lowervp != NULL && vp->v_vnlock != &vp->v_lock,
+	    ("Reclaiming incomplete null vnode %p", vp));
 
-	if (lowervp)
-		null_hashrem(xp);
+	null_hashrem(xp);
 	/*
 	 * Use the interlock to protect the clearing of v_data to
	 * prevent faults in null_lock().
@@ -713,10 +719,7 @@
 	vp->v_object = NULL;
 	vp->v_vnlock = &vp->v_lock;
 	VI_UNLOCK(vp);
-	if (lowervp)
-		vput(lowervp);
-	else
-		panic("null_reclaim: reclaiming a node with no lowervp");
+	vput(lowervp);
 	free(xp, M_NULLFSNODE);
 
 	return (0);

From owner-freebsd-fs@FreeBSD.ORG Wed Jan 18 22:07:26 2012
From: John Baldwin <jhb@freebsd.org>
To: Rick Macklem
Cc: Peter Wemm, fs@freebsd.org
Date: Wed, 18 Jan 2012 17:07:21 -0500
Subject: Race in NFS lookup can result in stale namecache entries

I recently encountered a problem at work with a very stale name cache
entry. A directory was renamed on one NFS client and a new directory
was created with the same name. On another NFS client, both the old
and new pathnames resolved to the old filehandle and stayed that way
for days. It was only fixed by touching the parent directory which
forced the "wrong" NFS client to flush name cache entries for the
directory and repopulate it via LOOKUPs. I eventually figured out the
race condition that triggered this and was able to reproduce it. (I
had to hack up the NFS client to do some explicit sleeps to order the
steps right to trigger the race however. It seems to be very rare in
practice.) The root cause for the stale entry being trusted is that
each per-vnode nfsnode structure has a single 'n_ctime' timestamp used
to validate positive name cache entries. However, if there are
multiple entries for a single vnode, they were all sharing a single
timestamp. Assume you have three threads spread across two NFS
clients (R1 on the client doing the directory rename, and T1 and T2 on
the "victim" NFS client), and assume that thread S1 represents the NFS
server and the order it completes requests. Also, assume that $D
represents a parent directory where the rename occurs and that the
original directory is named "foo". Finally, assume that F1 is the
original directory's filehandle, and F2 is the new filehandle.
Time increases as the graph goes down:

R1             T1             T2             S1
-------------  -------------  -------------  ---------------
               LOOKUP "$D/foo"
               (1)
                                             REPLY (1) "foo" F1
               start reply
               processing
               up to
               extracting
               post-op attrs
RENAME "$D/foo"
"$D/baz" (2)
                                             REPLY (2)
                              GETATTR $D
                              during lookup
                              due to expiry
                              (3)
                                             REPLY (3)
                              flush $D name
                              cache entries
                              due to updated
                              timestamp
                              LOOKUP "$D/baz"
                              (4)
                                             REPLY (4) "baz" F1
                              process reply,
                              including
                              post-op attrs
                              that set F1's
                              cached attrs
                              to a ctime
                              post RENAME

               resume reply   finish reply
               processing     processing
               including      including
               setting F1's   setting F1's
               n_ctime and    n_ctime and
               adding cache   adding cache
               entry          entry

At the end of this, the "victim" NFS client now has two name cache entries, for "$D/foo" and "$D/baz", that both point to the F1 filehandle. The n_ctime used to validate these name cache hits in nfs_lookup() has already been updated to post-RENAME, so nfs_lookup() will trust these entries until a future change to F1's i-node. Further, "$D"'s local attribute cache already reflects the updated ctime post-RENAME, so it will not flush its name cache entries until a future change to the directory.

The root problem is that the name cache entry for "foo" was added using the wrong ctime. It really should be using the F1 attributes in the post-op attributes from the LOOKUP reply, not from F1's local attribute cache. However, just changing that is not sufficient: there are still races between the calls to cache_enter() and the updating of n_ctime.

What I concluded is that it would really be far simpler and more obvious if the cached timestamps were stored in the namecache entry directly rather than having multiple name cache entries validated by shared state in the nfsnode. This does mean allowing the name cache to hold some filesystem-specific state, but I felt this was much cleaner than adding a lot more complexity to nfs_lookup(). Also, this turns out to be fairly non-invasive to implement since nfs_lookup() calls cache_lookup() directly, while other filesystems only call it indirectly via vfs_cache_lookup(). I considered letting filesystems store a void * cookie in the name cache entry and having them provide a destructor, etc. However, that would require extra allocations for NFS lookups. Instead, I just adjusted the name cache API to explicitly allow the filesystem to store a single timestamp in a name cache entry by adding a new 'cache_enter_time()' that accepts a struct timespec that is copied into the entry. 'cache_enter_time()' also saves the current value of 'ticks' in the entry. 'cache_lookup()' is modified to add two new arguments used to return the timespec and ticks value stored in a namecache entry when a hit in the cache occurs.

One wrinkle with this is that the name cache does not create actual entries for ".", and thus it would not store any timestamps for those lookups. To fix this I changed the NFS client to explicitly fast-path lookups of "." by always returning the current directory as set up by cache_lookup() and never bothering to do a LOOKUP or check for stale attributes in that case.

The current patch against 8 is at

http://www.FreeBSD.org/~jhb/patches/nfs_lookup.patch

It includes ABI and API compat shims so that it is suitable for merging to stable branches. For HEAD I would likely retire the cache_lookup_times() name and just change all the callers of cache_lookup() (there are only a handful, and nwfs and smbfs might benefit from this functionality anyway).
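A minimal usage sketch, paraphrasing the description above; the declarations and the NFS-side variable names (post_op_ctime, nctime, ncticks) are illustrative assumptions, with the authoritative versions in the patch:

/*
 * Sketch only: signatures as described in the text above; the real
 * declarations are in the patch.
 */
void	cache_enter_time(struct vnode *dvp, struct vnode *vp,
	    struct componentname *cnp, struct timespec *tsp);
int	cache_lookup_times(struct vnode *dvp, struct vnode **vpp,
	    struct componentname *cnp, struct timespec *tsp, int *ticksp);

/*
 * In nfs_lookup(), entries would be created with the ctime carried in
 * the LOOKUP reply's post-op attributes rather than the vnode's
 * (possibly newer) cached attributes:
 */
	cache_enter_time(dvp, newvp, cnp, &post_op_ctime);

/*
 * ...and a later positive hit (cache_lookup() returns -1) is trusted
 * only while that saved ctime still matches the file's current ctime
 * ('np' is the nfsnode of the vnode found in the cache):
 */
	error = cache_lookup_times(dvp, vpp, cnp, &nctime, &ncticks);
	if (error == -1 && timespeccmp(&nctime, &np->n_vattr.va_ctime, ==))
		return (0);	/* positive hit still valid */

The point of the per-entry timestamp is visible in the last check: "foo" and "baz" no longer share one n_ctime, so the stale "foo" entry fails its own comparison even after the RENAME has updated the vnode's attributes.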
--
John Baldwin

From owner-freebsd-fs@FreeBSD.ORG Wed Jan 18 23:52:59 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0670A106566C; Wed, 18 Jan 2012 23:52:59 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 5ECEE8FC08; Wed, 18 Jan 2012 23:52:58 +0000 (UTC) Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 18 Jan 2012 18:52:57 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 41BB3B3F24; Wed, 18 Jan 2012 18:52:57 -0500 (EST) Date: Wed, 18 Jan 2012 18:52:57 -0500 (EST) From: Rick Macklem To: John Baldwin Message-ID: <1143916684.516944.1326930777192.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <201201181707.21293.jhb@freebsd.org> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: Rick Macklem , fs@freebsd.org, Peter Wemm Subject: Re: Race in NFS lookup can result in stale namecache entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 18 Jan 2012 23:52:59 -0000

John Baldwin wrote:
> [full quote of the message above trimmed]
It sounds good to me, although I haven't yet looked at the patch or thought about it much.

However (and I think you're already aware of this), given time clock resolution etc., as soon as multiple clients start manipulating the contents of a directory concurrently, there is going to be a possibility of having a stale name cache entry. I think you've already mentioned this, but having a timeout on positive name cache entries, like we did for negative name cache entries, will at least limit the effect of these. For negative name cache entries, the little test I did showed that the name cache hit rate was almost as good for a 30-60 sec timeout as for an infinite timeout. I suspect something similar might be true for positive name cache entries, and it will be easy to do some measurements once it is coded.

If you would like, I can code up a positive name cache timeout similar to what you did for the negative name cache entries, or would you prefer to do so?

rick
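A sketch of the check being offered here, assuming the 'ticks' value recorded by cache_enter_time() is returned on a positive hit; the tunable name and the label are invented for illustration:

/* Hypothetical tunable (seconds); name invented for this sketch. */
static u_int nfs_poscache_timeout = 60;

	/*
	 * On a positive hit, 'ncticks' is the 'ticks' value saved when
	 * the entry was created.  Entries older than the timeout are
	 * treated as misses, mirroring the existing negative-entry
	 * timeout, so a stale entry can survive at most
	 * nfs_poscache_timeout seconds.
	 */
	if ((u_int)(ticks - ncticks) >= nfs_poscache_timeout * hz)
		goto dorpc;	/* fall through to a real LOOKUP */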
From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 14:06:25 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 332631065674; Thu, 19 Jan 2012 14:06:25 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from mail.zoral.com.ua (mx0.zoral.com.ua [91.193.166.200]) by mx1.freebsd.org (Postfix) with ESMTP id BB3F98FC1C; Thu, 19 Jan 2012 14:06:24 +0000 (UTC) Received: from skuns.kiev.zoral.com.ua (localhost [127.0.0.1]) by mail.zoral.com.ua (8.14.2/8.14.2) with ESMTP id q0JE6GGZ029500 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 19 Jan 2012 16:06:16 +0200 (EET) (envelope-from kostikbel@gmail.com) Received: (from kostik@localhost) by deviant.kiev.zoral.com.ua (8.14.5/8.14.5/Submit) id q0JE6Fkn078051; Thu, 19 Jan 2012 16:06:15 +0200 (EET) (envelope-from kostikbel@gmail.com) Date: Thu, 19 Jan 2012 16:06:13 +0200 From: Kostik Belousov To: John Baldwin Message-ID: <20120119140613.GD31224@deviant.kiev.zoral.com.ua> References: <201201181707.21293.jhb@freebsd.org> In-Reply-To: <201201181707.21293.jhb@freebsd.org> User-Agent: Mutt/1.4.2.3i Cc: Rick Macklem , fs@freebsd.org, Peter Wemm Subject: Re: Race in NFS lookup can result in stale namecache entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 14:06:25 -0000

On Wed, Jan 18, 2012 at 05:07:21PM -0500, John Baldwin wrote:
> [description of the cache_enter_time()/cache_lookup() changes and the
> patch URL trimmed; see the original message above]

So now you add 8*2+4 bytes to each namecache entry on amd64, unconditionally. The current size of the invariant part of struct namecache on amd64 is 72 bytes, so the addition of 20 bytes looks slightly excessive. I am not sure about the typical distribution of namecache nc_name lengths, so it is not obvious whether the change increases memory usage significantly.

A flag could be added to nc_flags to indicate the presence of a timestamp. The timestamps would be conditionally placed after nc_nlen; we could probably use a union to ease the access. Then, the direct dereferences of nc_name would need to be converted to some inline function.

I can do this after your patch is committed, if you consider the memory usage saving worth it.
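For concreteness, the conditional layout described above could look roughly like this; the flag value, field names, and accessor are invented for illustration, and the zero-length arrays rely on the usual GCC extension already used in the kernel:

#define	NCF_TS		0x01	/* invented: entry carries timestamps */

struct namecache {
	/* ... existing linkage, nc_dvp/nc_vp, hash fields ... */
	u_char	nc_flag;		/* flag bits */
	u_char	nc_nlen;		/* length of name */
	union {
		struct {
			struct timespec	tsu_time;   /* fs-supplied time */
			int		tsu_ticks;  /* 'ticks' at enter */
			char		tsu_name[0];
		} nu_ts;
		char	nu_name[0];	/* no timestamps present */
	} n_un;
};

/* Direct nc_name dereferences become calls to an inline accessor. */
static __inline char *
nc_get_name(struct namecache *ncp)
{

	return ((ncp->nc_flag & NCF_TS) != 0 ?
	    ncp->n_un.nu_ts.tsu_name : ncp->n_un.nu_name);
}

Entries created without a timestamp would then stay at today's size, and only cache_enter_time() callers would pay for the extra 20 bytes.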
From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 15:46:55 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 49292106566B for ; Thu, 19 Jan 2012 15:46:55 +0000 (UTC) (envelope-from martin.ranne@kockumsonics.com) Received: from webmail.kockumsonics.com (mail.kockumsonics.com [194.103.55.3]) by mx1.freebsd.org (Postfix) with ESMTP id 07F608FC16 for ; Thu, 19 Jan 2012 15:46:53 +0000 (UTC) Received: from MAILGATE.sonet.local ([192.168.12.8]) by mailgate ([192.168.12.8]) with mapi id 14.01.0355.002; Thu, 19 Jan 2012 16:36:21 +0100 From: Martin Ranne To: "freebsd-fs@freebsd.org" Thread-Topic: zpool import reboots computer Date: Thu, 19 Jan 2012 15:36:20 +0000 Message-ID: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> Accept-Language: sv-SE, en-US Content-Language: en-US X-MS-Has-Attach: yes x-originating-ip: [192.168.15.18] MIME-Version: 1.0 Subject: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 15:46:55 -0000

I had a failure in one server where I am trying to determine whether it is memory or CPU; it shows up as a memory failure in memtest86. The result is that it managed to damage the zpool, which is a raidz2 with 6 disks.

If I boot from a FreeBSD 9.0-RELEASE usb stick and import the pool with "zpool import -f -R /mnt/zroot zroot", it reboots the computer. I have also tried to import it on another computer which is running 9-STABLE, with the same result. On the second computer I used "zpool import -f -R /mnt/zroot "zpool-id" serv06zroot".

Can I get some help on how to debug this and, in the end, be able to import the pool so that it can be repaired?

Data for the second computer can be found attached. The disks in question are da0 to da5.
Best Regards,
Martin Ranne

[Base64-encoded attachments omitted. The message carried the following files:
 - dmesg.out (17066 bytes): verbose boot messages from the 9-STABLE machine
 - gpart-da0.out through gpart-da5.out (1246 bytes each): GPT partition listings for the six pool disks (freebsd-boot, freebsd-swap, and freebsd-zfs partitions on each)
 - loader.conf (204 bytes)
 - sysctl.conf (393 bytes)
 - sysctl-zfs.out (7546 bytes): vfs.zfs and kstat.zfs sysctl values]
IDAKa3N0YXQuemZzLm1pc2MuYXJjc3RhdHMubDJfd3JpdGVzX3NlbnQ6IDAKa3N0YXQuemZzLm1p c2MuYXJjc3RhdHMubDJfd3JpdGVzX2RvbmU6IDAKa3N0YXQuemZzLm1pc2MuYXJjc3RhdHMubDJf d3JpdGVzX2Vycm9yOiAwCmtzdGF0Lnpmcy5taXNjLmFyY3N0YXRzLmwyX3dyaXRlc19oZHJfbWlz czogMAprc3RhdC56ZnMubWlzYy5hcmNzdGF0cy5sMl9ldmljdF9sb2NrX3JldHJ5OiAwCmtzdGF0 Lnpmcy5taXNjLmFyY3N0YXRzLmwyX2V2aWN0X3JlYWRpbmc6IDAKa3N0YXQuemZzLm1pc2MuYXJj c3RhdHMubDJfZnJlZV9vbl93cml0ZTogMAprc3RhdC56ZnMubWlzYy5hcmNzdGF0cy5sMl9hYm9y dF9sb3dtZW06IDAKa3N0YXQuemZzLm1pc2MuYXJjc3RhdHMubDJfY2tzdW1fYmFkOiAwCmtzdGF0 Lnpmcy5taXNjLmFyY3N0YXRzLmwyX2lvX2Vycm9yOiAwCmtzdGF0Lnpmcy5taXNjLmFyY3N0YXRz LmwyX3NpemU6IDAKa3N0YXQuemZzLm1pc2MuYXJjc3RhdHMubDJfaGRyX3NpemU6IDAKa3N0YXQu emZzLm1pc2MuYXJjc3RhdHMubWVtb3J5X3Rocm90dGxlX2NvdW50OiAwCmtzdGF0Lnpmcy5taXNj LmFyY3N0YXRzLmwyX3dyaXRlX3RyeWxvY2tfZmFpbDogMAprc3RhdC56ZnMubWlzYy5hcmNzdGF0 cy5sMl93cml0ZV9wYXNzZWRfaGVhZHJvb206IDAKa3N0YXQuemZzLm1pc2MuYXJjc3RhdHMubDJf d3JpdGVfc3BhX21pc21hdGNoOiAwCmtzdGF0Lnpmcy5taXNjLmFyY3N0YXRzLmwyX3dyaXRlX2lu X2wyOiAwCmtzdGF0Lnpmcy5taXNjLmFyY3N0YXRzLmwyX3dyaXRlX2lvX2luX3Byb2dyZXNzOiAw CmtzdGF0Lnpmcy5taXNjLmFyY3N0YXRzLmwyX3dyaXRlX25vdF9jYWNoZWFibGU6IDEKa3N0YXQu emZzLm1pc2MuYXJjc3RhdHMubDJfd3JpdGVfZnVsbDogMAprc3RhdC56ZnMubWlzYy5hcmNzdGF0 cy5sMl93cml0ZV9idWZmZXJfaXRlcjogMAprc3RhdC56ZnMubWlzYy5hcmNzdGF0cy5sMl93cml0 ZV9waW9zOiAwCmtzdGF0Lnpmcy5taXNjLmFyY3N0YXRzLmwyX3dyaXRlX2J1ZmZlcl9ieXRlc19z Y2FubmVkOiAwCmtzdGF0Lnpmcy5taXNjLmFyY3N0YXRzLmwyX3dyaXRlX2J1ZmZlcl9saXN0X2l0 ZXI6IDAKa3N0YXQuemZzLm1pc2MuYXJjc3RhdHMubDJfd3JpdGVfYnVmZmVyX2xpc3RfbnVsbF9p dGVyOiAwCmtzdGF0Lnpmcy5taXNjLnZkZXZfY2FjaGVfc3RhdHMuZGVsZWdhdGlvbnM6IDAKa3N0 YXQuemZzLm1pc2MudmRldl9jYWNoZV9zdGF0cy5oaXRzOiAwCmtzdGF0Lnpmcy5taXNjLnZkZXZf Y2FjaGVfc3RhdHMubWlzc2VzOiAwCg== --_011_39C592E81AEC0B418EAD826FC1BBB09B25031Dmailgate_-- From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 15:50:48 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 41055106567A; Thu, 19 Jan 2012 15:50:48 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id 129288FC08; Thu, 19 Jan 2012 15:50:48 +0000 (UTC) Received: from bigwig.baldwin.cx (bigwig.baldwin.cx [96.47.65.170]) by cyrus.watson.org (Postfix) with ESMTPSA id BF44C46B0D; Thu, 19 Jan 2012 10:50:47 -0500 (EST) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id 1DF82B99C; Thu, 19 Jan 2012 10:50:47 -0500 (EST) From: John Baldwin To: Kostik Belousov Date: Thu, 19 Jan 2012 10:26:09 -0500 User-Agent: KMail/1.13.5 (FreeBSD/8.2-CBSD-20110714-p10; KDE/4.5.5; amd64; ; ) References: <201201181707.21293.jhb@freebsd.org> <20120119140613.GD31224@deviant.kiev.zoral.com.ua> In-Reply-To: <20120119140613.GD31224@deviant.kiev.zoral.com.ua> MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-15" Content-Transfer-Encoding: 7bit Message-Id: <201201191026.09431.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (bigwig.baldwin.cx); Thu, 19 Jan 2012 10:50:47 -0500 (EST) Cc: Rick Macklem , fs@freebsd.org, Peter Wemm Subject: Re: Race in NFS lookup can result in stale namecache entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 15:50:48 -0000 On Thursday, January 19, 2012 9:06:13 am 
Kostik Belousov wrote: > On Wed, Jan 18, 2012 at 05:07:21PM -0500, John Baldwin wrote: > ... > > What I concluded is that it would really be far simpler and more > > obvious if the cached timestamps were stored in the namecache entry > > directly rather than having multiple name cache entries validated by > > shared state in the nfsnode. This does mean allowing the name cache > > to hold some filesystem-specific state. However, I felt this was much > > cleaner than adding a lot more complexity to nfs_lookup(). Also, this > > turns out to be fairly non-invasive to implement since nfs_lookup() > > calls cache_lookup() directly, but other filesystems only call it > > indirectly via vfs_cache_lookup(). I considered letting filesystems > > store a void * cookie in the name cache entry and having them provide > > a destructor, etc. However, that would require extra allocations for > > NFS lookups. Instead, I just adjusted the name cache API to > > explicitly allow the filesystem to store a single timestamp in a name > > cache entry by adding a new 'cache_enter_time()' that accepts a struct > > timespec that is copied into the entry. 'cache_enter_time()' also > > saves the current value of 'ticks' in the entry. 'cache_lookup()' is > > modified to add two new arguments used to return the timespec and > > ticks value used for a namecache entry when a hit in the cache occurs. > > > > One wrinkle with this is that the name cache does not create actual > > entries for ".", and thus it would not store any timestamps for those > > lookups. To fix this I changed the NFS client to explicitly fast-path > > lookups of "." by always returning the current directory as setup by > > cache_lookup() and never bothering to do a LOOKUP or check for stale > > attributes in that case. > > > > The current patch against 8 is at > > http://www.FreeBSD.org/~jhb/patches/nfs_lookup.patch > ... > > So now you add 8*2+4 bytes to each namecache entry on amd64 unconditionally. > Current size of the struct namecache invariant part on amd64 is 72 bytes, > so addition of 20 bytes looks slightly excessive. I am not sure about > typical distribution of the namecache nc_name length, so it is unobvious > does the change changes the memory usage significantly. > > A flag could be added to nc_flags to indicate the presence of timestamp. > The timestamps would be conditionally placed after nc_nlen, we probably > could use union to ease the access. Then, the direct dereferences of > nc_name would need to be converted to some inline function. > > I can do this after your patch is committed, if you consider the memory > usage saving worth it. Hmm, if the memory usage really is worrying then I could move to using the void * cookie method instead. 
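For readers following the thread, here is a minimal sketch of the interface change being discussed. The prototypes are reconstructed from the prose above and from the cache_lookup_times() name mentioned later in the thread; the argument order and exact types in the real patch may differ, so treat this as an assumption, not the patch itself:

    /* Like cache_enter(), but copies *tsp into the new entry and also
       records the current value of 'ticks' alongside it. */
    void cache_enter_time(struct vnode *dvp, struct vnode *vp,
        struct componentname *cnp, struct timespec *tsp);

    /* Like cache_lookup(), but on a positive hit also returns the
       timespec and ticks value stored when the entry was created. */
    int cache_lookup_times(struct vnode *dvp, struct vnode **vpp,
        struct componentname *cnp, struct timespec *tsp, int *ticksp);

The point of the change is that nfs_lookup() can then validate each hit against the timestamp stored in that particular entry, roughly:

    struct timespec nctime;
    int ncticks;

    if (cache_lookup_times(dvp, vpp, cnp, &nctime, &ncticks) == -1) {
        /* Positive hit (cache_lookup() returns -1 on a hit): compare
           nctime against the server-supplied attributes instead of the
           shared np->n_ctime in the nfsnode. */
    }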
-- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 15:50:49 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 221321065672; Thu, 19 Jan 2012 15:50:49 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id D54EB8FC14; Thu, 19 Jan 2012 15:50:48 +0000 (UTC) Received: from bigwig.baldwin.cx (bigwig.baldwin.cx [96.47.65.170]) by cyrus.watson.org (Postfix) with ESMTPSA id 86B1246B09; Thu, 19 Jan 2012 10:50:48 -0500 (EST) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id E336DB9A0; Thu, 19 Jan 2012 10:50:47 -0500 (EST) From: John Baldwin To: Rick Macklem Date: Thu, 19 Jan 2012 10:27:29 -0500 User-Agent: KMail/1.13.5 (FreeBSD/8.2-CBSD-20110714-p10; KDE/4.5.5; amd64; ; ) References: <1143916684.516944.1326930777192.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <1143916684.516944.1326930777192.JavaMail.root@erie.cs.uoguelph.ca> MIME-Version: 1.0 Content-Type: Text/Plain; charset="utf-8" Content-Transfer-Encoding: 7bit Message-Id: <201201191027.29261.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (bigwig.baldwin.cx); Thu, 19 Jan 2012 10:50:48 -0500 (EST) Cc: Rick Macklem , fs@freebsd.org, Peter Wemm Subject: Re: Race in NFS lookup can result in stale namecache entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 15:50:49 -0000 On Wednesday, January 18, 2012 6:52:57 pm Rick Macklem wrote: > John Baldwin wrote: > > I recently encountered a problem at work with a very stale name cache > > entry. A directory was renamed on one NFS client and a new directory > > was created with the same name. On another NFS client, both the old > > and new pathnames resolved to the old filehandle and stayed that way > > for days. It was only fixed by touching the parent directory which > > forced the "wrong" NFS client to flush name cache entries for the > > directory and repopulate it via LOOKUPs. I eventually figured out the > > race condition that triggered this and was able to reproduce it. (I > > had to hack up the NFS client to do some explicit sleeps to order the > > steps right to trigger the race however. It seems to be very rare in > > practice.) The root cause for the stale entry being trusted is that > > each per-vnode nfsnode structure has a single 'n_ctime' timestamp used > > to validate positive name cache entries. However, if there are > > multiple entries for a single vnode, they were all sharing a single > > timestamp. Assume you have three threads spread across two NFS > > clients (R1 on the client doing the directory rename, and T1 and T2 on > > the "victim" NFS client), and assume that thread S1 represents the NFS > > server and the order it completes requests. Also, assume that $D > > represents a parent directory where the rename occurs and that the > > original directory is named "foo". Finally, assume that F1 is the > > original directory's filehandle, and F2 is the new filehandle. 
> > Time increases as the graph goes down:
> >
> >      R1                T1                T2                S1
> > -------------     -------------     -------------     ---------------
> >                   LOOKUP "$D/foo"
> >                   (1)
> >                                                        REPLY (1) "foo" F1
> >                   start reply
> >                   processing
> >                   up to
> >                   extracting
> >                   post-op attrs
> > RENAME "$D/foo"
> > "$D/baz" (2)
> >                                                        REPLY (2)
> >                                     GETATTR $D
> >                                     during lookup
> >                                     due to expiry
> >                                     (3)
> >                                                        REPLY (3)
> >                                     flush $D name
> >                                     cache entries
> >                                     due to updated
> >                                     timestamp
> >                                     LOOKUP "$D/baz"
> >                                     (4)
> >                                                        REPLY (4) "baz" F1
> >                                     process reply,
> >                                     including
> >                                     post-op attrs
> >                                     that set F1's
> >                                     cached attrs
> >                                     to a ctime
> >                                     post RENAME
> >
> >                   resume reply      finish reply
> >                   processing        processing
> >                   including         including
> >                   setting F1's      setting F1's
> >                   n_ctime and       n_ctime and
> >                   adding cache      adding cache
> >                   entry             entry
> >
> > At the end of this, the "victim" NFS client now has two name cache entries for "$D/foo" and "$D/baz" that point to the F1 filehandle. The n_ctime used to validate these name cache hits in nfs_lookup() is already updated to post RENAME, so nfs_lookup() will trust these entries until a future change to F1's i-node. Further, "$D"'s local attribute cache already reflects the updated ctime post RENAME, so it will not flush its name cache entries until a future change to the directory.
> >
> > The root problem is that the name cache entry for "foo" was added using the wrong ctime. It really should be using the F1 attributes in the post-op attributes from the LOOKUP reply, not from F1's local attribute cache. However, just changing that is not sufficient. There are still races with the calls to cache_enter() and updating n_ctime.
> >
> > What I concluded is that it would really be far simpler and more obvious if the cached timestamps were stored in the namecache entry directly rather than having multiple name cache entries validated by shared state in the nfsnode. This does mean allowing the name cache to hold some filesystem-specific state. However, I felt this was much cleaner than adding a lot more complexity to nfs_lookup(). Also, this turns out to be fairly non-invasive to implement since nfs_lookup() calls cache_lookup() directly, but other filesystems only call it indirectly via vfs_cache_lookup(). I considered letting filesystems store a void * cookie in the name cache entry and having them provide a destructor, etc. However, that would require extra allocations for NFS lookups. Instead, I just adjusted the name cache API to explicitly allow the filesystem to store a single timestamp in a name cache entry by adding a new 'cache_enter_time()' that accepts a struct timespec that is copied into the entry. 'cache_enter_time()' also saves the current value of 'ticks' in the entry. 'cache_lookup()' is modified to add two new arguments used to return the timespec and ticks value used for a namecache entry when a hit in the cache occurs.
> >
> > One wrinkle with this is that the name cache does not create actual entries for ".", and thus it would not store any timestamps for those lookups. To fix this I changed the NFS client to explicitly fast-path lookups of "." by always returning the current directory as setup by cache_lookup() and never bothering to do a LOOKUP or check for stale attributes in that case.
> > > > The current patch against 8 is at > > http://www.FreeBSD.org/~jhb/patches/nfs_lookup.patch > > > > It includes ABI and API compat shims so that it is suitable for > > merging to stable branches. For HEAD I would likely retire the > > cache_lookup_times() name and just change all the callers of > > cache_lookup() (there are only a handful, and nwfs and smbfs might > > benefit from this functionality anyway). > > > It sounds good to me, although I haven`t yet looked at the patch > or thought about it much. > > However, (and I think you`re already aware of this) given time clock > resolution etc, as soon as multiple clients start manipulating the > contents of a directory concurrently there is going to be a possibility > of having a stale name cache entry. I think you`ve already mentioned this, > but having a timeout on positive name cache entries like we did for > negative name cache entries, will at least limit the effect of these. > > For negative name cache entries, the little test I did showed that name > cache hit was almost as good for a 30-60sec timeout as an infinite timeout. > I suspect something similar might be true for positive name cache entries > and it will be easy to do some measurements once it is coded. > > If you would like, I can code up a positive name cache timeout similar > to what you did for the negative name cache entries or would you prefer > to do so? I already have a patch for that (though it is not relevant to this change). It will be easy to update it though once this change is made (actually, it will be simpler since it can re-use ncticks rather than adding a new n_ctime_ticks field to the nfsnode). I had done it by adding a new 'nametimeo' mount option that defaulted to 60 seconds. -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 16:02:04 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A2636106566B; Thu, 19 Jan 2012 16:02:04 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from mail.zoral.com.ua (mx0.zoral.com.ua [91.193.166.200]) by mx1.freebsd.org (Postfix) with ESMTP id 391B38FC18; Thu, 19 Jan 2012 16:02:03 +0000 (UTC) Received: from skuns.kiev.zoral.com.ua (localhost [127.0.0.1]) by mail.zoral.com.ua (8.14.2/8.14.2) with ESMTP id q0JG1uuF040817 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 19 Jan 2012 18:01:57 +0200 (EET) (envelope-from kostikbel@gmail.com) Received: from deviant.kiev.zoral.com.ua (kostik@localhost [127.0.0.1]) by deviant.kiev.zoral.com.ua (8.14.5/8.14.5) with ESMTP id q0JG1ug7078703; Thu, 19 Jan 2012 18:01:56 +0200 (EET) (envelope-from kostikbel@gmail.com) Received: (from kostik@localhost) by deviant.kiev.zoral.com.ua (8.14.5/8.14.5/Submit) id q0JG1u5r078702; Thu, 19 Jan 2012 18:01:56 +0200 (EET) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: deviant.kiev.zoral.com.ua: kostik set sender to kostikbel@gmail.com using -f Date: Thu, 19 Jan 2012 18:01:56 +0200 From: Kostik Belousov To: John Baldwin Message-ID: <20120119160156.GF31224@deviant.kiev.zoral.com.ua> References: <201201181707.21293.jhb@freebsd.org> <20120119140613.GD31224@deviant.kiev.zoral.com.ua> <201201191026.09431.jhb@freebsd.org> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="3CtsHjCpq0rLy5Nm" Content-Disposition: inline In-Reply-To: <201201191026.09431.jhb@freebsd.org> User-Agent: Mutt/1.4.2.3i X-Virus-Scanned: clamav-milter 
0.95.2 at skuns.kiev.zoral.com.ua X-Virus-Status: Clean X-Spam-Status: No, score=-3.9 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00 autolearn=ham version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on skuns.kiev.zoral.com.ua Cc: Rick Macklem , fs@freebsd.org, Peter Wemm Subject: Re: Race in NFS lookup can result in stale namecache entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 16:02:04 -0000 --3CtsHjCpq0rLy5Nm Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Jan 19, 2012 at 10:26:09AM -0500, John Baldwin wrote: > On Thursday, January 19, 2012 9:06:13 am Kostik Belousov wrote: > > On Wed, Jan 18, 2012 at 05:07:21PM -0500, John Baldwin wrote: > > ... > > > What I concluded is that it would really be far simpler and more > > > obvious if the cached timestamps were stored in the namecache entry > > > directly rather than having multiple name cache entries validated by > > > shared state in the nfsnode. This does mean allowing the name cache > > > to hold some filesystem-specific state. However, I felt this was much > > > cleaner than adding a lot more complexity to nfs_lookup(). Also, this > > > turns out to be fairly non-invasive to implement since nfs_lookup() > > > calls cache_lookup() directly, but other filesystems only call it > > > indirectly via vfs_cache_lookup(). I considered letting filesystems > > > store a void * cookie in the name cache entry and having them provide > > > a destructor, etc. However, that would require extra allocations for > > > NFS lookups. Instead, I just adjusted the name cache API to > > > explicitly allow the filesystem to store a single timestamp in a name > > > cache entry by adding a new 'cache_enter_time()' that accepts a struct > > > timespec that is copied into the entry. 'cache_enter_time()' also > > > saves the current value of 'ticks' in the entry. 'cache_lookup()' is > > > modified to add two new arguments used to return the timespec and > > > ticks value used for a namecache entry when a hit in the cache occurs. > > > One wrinkle with this is that the name cache does not create actual > > > entries for ".", and thus it would not store any timestamps for those > > > lookups. To fix this I changed the NFS client to explicitly fast-path > > > lookups of "." by always returning the current directory as setup by > > > cache_lookup() and never bothering to do a LOOKUP or check for stale > > > attributes in that case. > > > The current patch against 8 is at > > > http://www.FreeBSD.org/~jhb/patches/nfs_lookup.patch > > ... > > So now you add 8*2+4 bytes to each namecache entry on amd64 unconditionally. > > Current size of the struct namecache invariant part on amd64 is 72 bytes, > > so addition of 20 bytes looks slightly excessive. I am not sure about > > typical distribution of the namecache nc_name length, so it is unobvious > > does the change changes the memory usage significantly. > > A flag could be added to nc_flags to indicate the presence of timestamp. > > The timestamps would be conditionally placed after nc_nlen, we probably > > could use union to ease the access. Then, the direct dereferences of > > nc_name would need to be converted to some inline function. > > I can do this after your patch is committed, if you consider the memory > > usage saving worth it. > Hmm, if the memory usage really is worrying then I could move to using the > void * cookie method instead. I think the current approach is better then cookie that again will be used only for NFS. With the cookie, you still has 8 bytes for each ncp. With union, you do not have the overhead for !NFS. Default setup allows for ~300000 vnodes on not too powerful amd64 machine, the ncsizefactor 2 together with 8 bytes for cookie is 4.5MB. For 20 bytes per ncp, we get 12MB overhead. --3CtsHjCpq0rLy5Nm Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (FreeBSD) iEYEARECAAYFAk8YPnQACgkQC3+MBN1Mb4ipHwCeORnmBgA4rozRlEEWBgAErGj7 gWgAoJiA9rkUITvywvz3H+EyxYHH04ga =riod -----END PGP SIGNATURE----- --3CtsHjCpq0rLy5Nm-- From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 16:02:29 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E719E1065672 for ; Thu, 19 Jan 2012 16:02:29 +0000 (UTC) (envelope-from c.kworr@gmail.com) Received: from mail-we0-f182.google.com (mail-we0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id 7870C8FC22 for ; Thu, 19 Jan 2012 16:02:29 +0000 (UTC) Received: by werg1 with SMTP id g1so97899wer.13 for ; Thu, 19 Jan 2012 08:02:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=t/WcqguswRSWk8D9VCtUjdT45BG3HtnxRR/gi+MD57g=; b=oz2L1osPbq2Jw6ixd8DhuAK1aSE5yEOxOazW1llJQXAPq38GQZz8VHj2DemxqWy+7m reOyvEEkXvRJVUSUQ1M7NqC3cFwoNy8/3iCxBTcdgidRtCOpHfXQerZxZCU7momFgIrg Gv/ln774K+GMjX6EHOTrO4QW6L7tCa57/2k2g= Received: by 10.216.138.219 with SMTP id a69mr836132wej.6.1326988948422; Thu, 19 Jan 2012 08:02:28 -0800 (PST) Received: from green.tandem.local (236-146-201-46.pool.ukrtel.net. [46.201.146.236]) by mx.google.com with ESMTPS id bu13sm31936824wib.6.2012.01.19.08.02.27 (version=SSLv3 cipher=OTHER); Thu, 19 Jan 2012 08:02:27 -0800 (PST) Message-ID: <4F183E92.9060300@gmail.com> Date: Thu, 19 Jan 2012 18:02:26 +0200 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:9.0.1) Gecko/20120110 Firefox/9.0.1 SeaMonkey/2.6.1 MIME-Version: 1.0 To: Martin Ranne References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> In-Reply-To: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: "freebsd-fs@freebsd.org" Subject: Re: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 16:02:30 -0000 Martin Ranne wrote: > I had a failure in one server where i try to determine if it is memory or cpu. It shows up as memory failure in memtest86. The result is that it managed to damage the zpool which is a raidz2 with 6 disks. > > If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f -R /mnt/zroot zroot it will reboot the computer. I have also tried to import it in another computer which is running 9-STABLE with the same result.
On the second computer I used zpool -f -R /mnt/zroot "zpool-id" serv06zroot > > Can I get some help on how to be able to debug this and in the end be able to import it to repair it. > > Data for the second computer can be found attached. The disks in question are da0 to da5 in this. Try importing pool in read-only mode. AFAIR: zpool import -f -o readonly=on -R /mnt/zroot zroot -- Sphinx of black quartz judge my vow. From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 16:19:39 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 843BD1065672 for ; Thu, 19 Jan 2012 16:19:39 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-vw0-f54.google.com (mail-vw0-f54.google.com [209.85.212.54]) by mx1.freebsd.org (Postfix) with ESMTP id 349F98FC17 for ; Thu, 19 Jan 2012 16:19:38 +0000 (UTC) Received: by vbbey12 with SMTP id ey12so91277vbb.13 for ; Thu, 19 Jan 2012 08:19:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=Md9sdePuSRUgqRKSn61JFqlDbpQZ6+Xu/WL20w5j2g8=; b=aC71QzlS2YMeLPpv0YjmuZO/NPJ+l++jJ1JDTjIdwW/M+d+fTwX3s3xiwHdSrBRs1j EAMY/AFA+0IoSR5DlzlIXwahSneRKcXYbeAZTWwveij4sUjF2WhbSclbje4oH+AW+OVO bK7V+qh1pnAVSmiNnWaE2hNUkZz+xG35f3jfY= MIME-Version: 1.0 Received: by 10.52.89.78 with SMTP id bm14mr13056663vdb.22.1326989978336; Thu, 19 Jan 2012 08:19:38 -0800 (PST) Received: by 10.220.191.130 with HTTP; Thu, 19 Jan 2012 08:19:38 -0800 (PST) In-Reply-To: <4F183E92.9060300@gmail.com> References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> <4F183E92.9060300@gmail.com> Date: Thu, 19 Jan 2012 08:19:38 -0800 Message-ID: From: Freddie Cash To: Volodymyr Kostyrko Content-Type: text/plain; charset=UTF-8 Cc: "freebsd-fs@freebsd.org" Subject: Re: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 16:19:39 -0000 On Thu, Jan 19, 2012 at 8:02 AM, Volodymyr Kostyrko wrote: > Martin Ranne wrote: >> >> I had a failure in one server where i try to determine if it is memory or >> cpu. It shows up as memory failure in memtest86. The result is that it >> managed to damage the zpool which is a raidz2 with 6 disks. >> >> If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f >> -R /mnt/zroot zroot it will reboot the computer. I have also tried to import >> it in another computer which is running 9-STABLE with the same result. On >> the second computer I used zpool -f -R /mnt/zroot "zpool-id" serv06zroot >> >> Can I get some help on how to be able to debug this and in the end be able >> to import it to repair it. >> >> Data for the second computer can be found attached. The disks in question >> are da0 to da5 in this. > > > Try importing pool in read-only mode. AFAIR: > > zpool import -f -o readonly=on -R /mnt/zroot zroot And consider -F instead of -f. You don't want to force the import, and possibly cause more damage. 
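Since the two flags are easy to confuse: -f forces the import of a pool that looks like it is still in use elsewhere, while -F is the recovery option that tries to rewind the pool to an earlier transaction group, and -n together with -F only reports whether such a rewind would succeed without actually doing it. A plausible order of attack for a damaged pool, using the pool name and altroot from the messages above (a sketch of the usual sequence, not something tested against this particular pool):

    zpool import -o readonly=on -R /mnt/zroot zroot    (read-only, writes nothing to the pool)
    zpool import -nF zroot                             (dry run: can a rewind make it importable?)
    zpool import -F -R /mnt/zroot zroot                (rewind to the last consistent txg and import)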
-- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 16:24:02 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 289CB1065674 for ; Thu, 19 Jan 2012 16:24:02 +0000 (UTC) (envelope-from martin.ranne@kockumsonics.com) Received: from webmail.kockumsonics.com (mail.kockumsonics.com [194.103.55.3]) by mx1.freebsd.org (Postfix) with ESMTP id ACC8C8FC13 for ; Thu, 19 Jan 2012 16:24:01 +0000 (UTC) Received: from MAILGATE.sonet.local ([192.168.12.8]) by mailgate ([192.168.12.8]) with mapi id 14.01.0355.002; Thu, 19 Jan 2012 17:24:00 +0100 From: Martin Ranne To: Volodymyr Kostyrko Thread-Topic: zpool import reboots computer Thread-Index: AczWvHf/qf1tgj/cQ3aTdT164KORY////cUA///pWMA= Date: Thu, 19 Jan 2012 16:23:59 +0000 Message-ID: <39C592E81AEC0B418EAD826FC1BBB09B25210D@mailgate> References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> <4F183E92.9060300@gmail.com> In-Reply-To: <4F183E92.9060300@gmail.com> Accept-Language: sv-SE, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [192.168.15.18] Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 MIME-Version: 1.0 Cc: "freebsd-fs@freebsd.org" Subject: RE: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 16:24:02 -0000

On 2012-01-19 17:02, Volodymyr Kostyrko wrote:
> Martin Ranne wrote:
>> I had a failure in one server where i try to determine if it is memory or cpu. It shows up as memory failure in memtest86. The result is that it managed to damage the zpool which is a raidz2 with 6 disks.
>>
>> If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f -R /mnt/zroot zroot it will reboot the computer. I have also tried to import it in another computer which is running 9-STABLE with the same result. On the second computer I used zpool -f -R /mnt/zroot "zpool-id" serv06zroot
>>
>> Can I get some help on how to be able to debug this and in the end be able to import it to repair it.
>>
>> Data for the second computer can be found attached. The disks in question are da0 to da5 in this.
> Try importing pool in read-only mode. AFAIR:
>
> zpool import -f -o readonly=on -R /mnt/zroot zroot

I just tried that with same result. Kernel panic and reboot of computer.

-----
No virus found in this message.
Checked by AVG - www.avg.com
Version: 2012.0.1901 / Virus Database: 2109/4752 - Release Date: 01/18/12

From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 16:41:28 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2FDCC106566B; Thu, 19 Jan 2012 16:41:28 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id DE9F28FC08; Thu, 19 Jan 2012
16:41:27 +0000 (UTC) Received: from bigwig.baldwin.cx (bigwig.baldwin.cx [96.47.65.170]) by cyrus.watson.org (Postfix) with ESMTPSA id 63A9546B0D; Thu, 19 Jan 2012 11:41:27 -0500 (EST) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id E615CB922; Thu, 19 Jan 2012 11:41:26 -0500 (EST) From: John Baldwin To: Kostik Belousov Date: Thu, 19 Jan 2012 11:17:28 -0500 User-Agent: KMail/1.13.5 (FreeBSD/8.2-CBSD-20110714-p10; KDE/4.5.5; amd64; ; ) References: <201201181707.21293.jhb@freebsd.org> <201201191026.09431.jhb@freebsd.org> <20120119160156.GF31224@deviant.kiev.zoral.com.ua> In-Reply-To: <20120119160156.GF31224@deviant.kiev.zoral.com.ua> MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-15" Content-Transfer-Encoding: 7bit Message-Id: <201201191117.28128.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (bigwig.baldwin.cx); Thu, 19 Jan 2012 11:41:27 -0500 (EST) Cc: Rick Macklem , fs@freebsd.org, Peter Wemm Subject: Re: Race in NFS lookup can result in stale namecache entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 16:41:28 -0000 On Thursday, January 19, 2012 11:01:56 am Kostik Belousov wrote: > On Thu, Jan 19, 2012 at 10:26:09AM -0500, John Baldwin wrote: > > On Thursday, January 19, 2012 9:06:13 am Kostik Belousov wrote: > > > On Wed, Jan 18, 2012 at 05:07:21PM -0500, John Baldwin wrote: > > > ... > > > > What I concluded is that it would really be far simpler and more > > > > obvious if the cached timestamps were stored in the namecache entry > > > > directly rather than having multiple name cache entries validated by > > > > shared state in the nfsnode. This does mean allowing the name cache > > > > to hold some filesystem-specific state. However, I felt this was much > > > > cleaner than adding a lot more complexity to nfs_lookup(). Also, this > > > > turns out to be fairly non-invasive to implement since nfs_lookup() > > > > calls cache_lookup() directly, but other filesystems only call it > > > > indirectly via vfs_cache_lookup(). I considered letting filesystems > > > > store a void * cookie in the name cache entry and having them provide > > > > a destructor, etc. However, that would require extra allocations for > > > > NFS lookups. Instead, I just adjusted the name cache API to > > > > explicitly allow the filesystem to store a single timestamp in a name > > > > cache entry by adding a new 'cache_enter_time()' that accepts a struct > > > > timespec that is copied into the entry. 'cache_enter_time()' also > > > > saves the current value of 'ticks' in the entry. 'cache_lookup()' is > > > > modified to add two new arguments used to return the timespec and > > > > ticks value used for a namecache entry when a hit in the cache occurs. > > > > > > > > One wrinkle with this is that the name cache does not create actual > > > > entries for ".", and thus it would not store any timestamps for those > > > > lookups. To fix this I changed the NFS client to explicitly fast-path > > > > lookups of "." by always returning the current directory as setup by > > > > cache_lookup() and never bothering to do a LOOKUP or check for stale > > > > attributes in that case. > > > > > > > > The current patch against 8 is at > > > > http://www.FreeBSD.org/~jhb/patches/nfs_lookup.patch > > > ... 
> > > > > > So now you add 8*2+4 bytes to each namecache entry on amd64 unconditionally. > > > Current size of the struct namecache invariant part on amd64 is 72 bytes, > > > so addition of 20 bytes looks slightly excessive. I am not sure about > > > typical distribution of the namecache nc_name length, so it is unobvious > > > does the change changes the memory usage significantly. > > > > > > A flag could be added to nc_flags to indicate the presence of timestamp. > > > The timestamps would be conditionally placed after nc_nlen, we probably > > > could use union to ease the access. Then, the direct dereferences of > > > nc_name would need to be converted to some inline function. > > > > > > I can do this after your patch is committed, if you consider the memory > > > usage saving worth it. > > > > Hmm, if the memory usage really is worrying then I could move to using the > > void * cookie method instead. > > I think the current approach is better then cookie that again will be > used only for NFS. With the cookie, you still has 8 bytes for each ncp. > With union, you do not have the overhead for !NFS. > > Default setup allows for ~300000 vnodes on not too powerful amd64 machine, > the ncsizefactor 2 together with 8 bytes for cookie is 4.5MB. For 20 bytes > per ncp, we get 12MB overhead. Ok. If you want to tackle the union bits I'm happy to let you do so. That will at least break up the changes a bit. -- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 16:43:54 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1AFC0106566C for ; Thu, 19 Jan 2012 16:43:54 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 67EB18FC23 for ; Thu, 19 Jan 2012 16:43:52 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id SAA20631; Thu, 19 Jan 2012 18:32:31 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <4F18459F.7040309@FreeBSD.org> Date: Thu, 19 Jan 2012 18:32:31 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:9.0) Gecko/20120111 Thunderbird/9.0 MIME-Version: 1.0 To: Martin Ranne References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> In-Reply-To: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> X-Enigmail-Version: undefined Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: "freebsd-fs@freebsd.org" Subject: Re: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 16:43:54 -0000 on 19/01/2012 17:36 Martin Ranne said the following: > I had a failure in one server where i try to determine if it is memory or cpu. It shows up as memory failure in memtest86. The result is that it managed to damage the zpool which is a raidz2 with 6 disks. > > If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f -R /mnt/zroot zroot it will reboot the computer. I have also tried to import it in another computer which is running 9-STABLE with the same result. 
On the second computer I used zpool -f -R /mnt/zroot "zpool-id" serv06zroot > > Can I get some help on how to be able to debug this and in the end be able to import it to repair it. > > Data for the second computer can be found attached. The disks in question are da0 to da5 in this. And the panic message is? -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 16:51:26 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 579511065674 for ; Thu, 19 Jan 2012 16:51:26 +0000 (UTC) (envelope-from andrnils@gmail.com) Received: from mail-bk0-f54.google.com (mail-bk0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id D4DBC8FC16 for ; Thu, 19 Jan 2012 16:51:25 +0000 (UTC) Received: by bkbc12 with SMTP id c12so173291bkb.13 for ; Thu, 19 Jan 2012 08:51:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; bh=MRdE0Rn3xhB9NJJNFeTaMgJ7JOYA7HWSbPLJkBCkIt4=; b=H+Lpur0Emg1l5mh/1qBpI7lV2E+gN8Bm/11fkKmhpgxudowAhwr1E0Pzlh8MoG22cD F1aoRwBYW2x4i0CxF1a7xwfYYRulm/dG9RvYacgOyPH57KCSRHsbrUU9D023SrhFm1xl 8Xza//XSIECqLsqtKXIBtr0U9DgKdj44+l6RU= MIME-Version: 1.0 Received: by 10.205.121.145 with SMTP id gc17mr6128510bkc.23.1326990376923; Thu, 19 Jan 2012 08:26:16 -0800 (PST) Received: by 10.204.40.74 with HTTP; Thu, 19 Jan 2012 08:26:16 -0800 (PST) Date: Thu, 19 Jan 2012 17:26:16 +0100 Message-ID: From: Andreas Nilsson To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: Booting from zfs snapshot X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 16:51:26 -0000 Hello, I'm trying to wrap my head around the process of booting from a zfs snapshot. I have hit a few roadblocks, and I hope this is the right list to post to about them. A short note on what I'm trying to achieve might be in order. In short: a nanobsd system on zfs only. I want to boot from a snapshot so that when I push out an upgrade with zfs send, the root filesystem remains unchanged.

The problems I've hit so far:
*1 Making the zpool.cache file available
*2 Having / mounted via an entry in fstab.

*1: The zpool.cache is needed to autoimport a pool, as I understand it. Is there a way to force the kernel to import a pool during bootup even though no zpool.cache is around? What does this file actually contain? I made an experiment and booted a disk with zfs root from machine a in machine b, and that worked. I did partition the disk with gpart using a gpt scheme, labeled the partition on which the pool resides as os, and upon creation of the zpool used gpt/os as the device. Does this mean that as long as gpt/os is available, any machine booting this disk will have the zpool autoimported?

*2: Having a line like tank/root/8.2-RELEASE-p5@ro / zfs ro 0 0 in fstab causes mount to throw an error and leave me in single user mode; once the system is booted, however, mount can mount a zfs snapshot just fine. Setting vfs.root.mountfrom in loader.conf works just fine though, as the sketch below shows.
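A concrete loader.conf sketch of the route that is reported to work, using the dataset and snapshot name from the fstab example above (substitute your own pool and dataset):

    zfs_load="YES"
    vfs.root.mountfrom="zfs:tank/root/8.2-RELEASE-p5@ro"

With root mounted straight from the loader variable, the kernel never needs an fstab entry for /, which sidesteps the single-user failure described in *2.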
Best regards Andreas Nilsson From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 17:36:26 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BA1C8106567C for ; Thu, 19 Jan 2012 17:36:26 +0000 (UTC) (envelope-from martin.ranne@kockumsonics.com) Received: from webmail.kockumsonics.com (mail.kockumsonics.com [194.103.55.3]) by mx1.freebsd.org (Postfix) with ESMTP id 450378FC20 for ; Thu, 19 Jan 2012 17:36:25 +0000 (UTC) Received: from MAILGATE.sonet.local ([192.168.12.8]) by mailgate ([192.168.12.8]) with mapi id 14.01.0355.002; Thu, 19 Jan 2012 18:36:24 +0100 From: Martin Ranne To: Andriy Gapon Thread-Topic: zpool import reboots computer Thread-Index: AczWvHf/qf1tgj/cQ3aTdT164KORYwAAxbSAAARQzcA= Date: Thu, 19 Jan 2012 17:36:23 +0000 Message-ID: <39C592E81AEC0B418EAD826FC1BBB09B252444@mailgate> References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> <4F18459F.7040309@FreeBSD.org> In-Reply-To: <4F18459F.7040309@FreeBSD.org> Accept-Language: sv-SE, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [192.168.15.18] Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Cc: "freebsd-fs@freebsd.org" Subject: RE: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 17:36:26 -0000 On 2012-01-19 17:32, Andriy Gapon wrote: on 19/01/2012 17:36 Martin Ranne said the following: >>I had a failure in one server where i try to determine if it is memory or cpu. It shows up as memory failure in memtest86. >>The result is that it managed to damage the zpool which is a raidz2 with 6 disks. >>If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f -R /mnt/zroot zroot it will reboot the computer. >>I have also tried to import it in another computer which is running 9-STABLE with the same result. On the second computer I >>used zpool -f -R /mnt/zroot "zpool-id" serv06zroot >>Can I get some help on how to be able to debug this and in the end be able to import it to repair it. >>Data for the second computer can be found attached. The disks in question are da0 to da5 in this. >And the panic message is? I am trying to get a crash dump but it hangs when dumping. ________________________________________ No virus found in this message.
Checked by AVG - www.avg.com Version: 2012.0.1901 / Virus Database: 2109/4753 - Release Date: 01/19/12 From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 17:55:15 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 589DF106564A for ; Thu, 19 Jan 2012 17:55:15 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id A311B8FC17 for ; Thu, 19 Jan 2012 17:55:14 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA21476; Thu, 19 Jan 2012 19:55:11 +0200 (EET) (envelope-from avg@FreeBSD.org) Message-ID: <4F1858FE.7020509@FreeBSD.org> Date: Thu, 19 Jan 2012 19:55:10 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:9.0) Gecko/20120111 Thunderbird/9.0 MIME-Version: 1.0 To: Martin Ranne References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> <4F18459F.7040309@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B252444@mailgate> In-Reply-To: <39C592E81AEC0B418EAD826FC1BBB09B252444@mailgate> X-Enigmail-Version: undefined Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: "freebsd-fs@freebsd.org" Subject: Re: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 17:55:15 -0000 on 19/01/2012 19:36 Martin Ranne said the following: > On 2012-01-19 17:32, Andriy Gapon wrote: > on 19/01/2012 17:36 Martin Ranne said the following: >>> I had a failure in one server where i try to determine if it is memory or cpu. It shows up as memory failure in memtest86. >>The result is that it managed to damage the zpool which is a raidz2 with 6 disks. > >>> If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f -R /mnt/zroot zroot it will reboot the computer. >>I have also tried to import it in another computer which is running 9-STABLE with the same result. On the second computer I >>used zpool -f -R /mnt/zroot "zpool-id" serv06zroot > >>> Can I get some help on how to be able to debug this and in the end be able to import it to repair it. > >>> Data for the second computer can be found attached. The disks in question are da0 to da5 in this. > >> And the panic message is? > > I am trying to get a crash dump but it hangs when dumping. 
Alternatives:
- serial console
- digital camera
- eyes plus pen and paper

-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 19:58:53 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E3BBE106566C; Thu, 19 Jan 2012 19:58:53 +0000 (UTC) (envelope-from martin.ranne@kockumsonics.com) Received: from webmail.kockumsonics.com (mail.kockumsonics.com [194.103.55.3]) by mx1.freebsd.org (Postfix) with ESMTP id 533E58FC14; Thu, 19 Jan 2012 19:58:52 +0000 (UTC) Received: from MAILGATE.sonet.local ([192.168.12.8]) by mailgate ([192.168.12.8]) with mapi id 14.01.0355.002; Thu, 19 Jan 2012 20:58:50 +0100 From: Martin Ranne To: Andriy Gapon Thread-Topic: zpool import reboots computer Thread-Index: AczWvHf/qf1tgj/cQ3aTdT164KORYwAAxbSAAARQzcD///SRAP//zVoQ Date: Thu, 19 Jan 2012 19:58:50 +0000 Message-ID: <39C592E81AEC0B418EAD826FC1BBB09B25253F@mailgate> References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> <4F18459F.7040309@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B252444@mailgate> <4F1858FE.7020509@FreeBSD.org> In-Reply-To: <4F1858FE.7020509@FreeBSD.org> Accept-Language: sv-SE, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [192.168.15.18] Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Cc: "freebsd-fs@freebsd.org" Subject: RE: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 19:58:54 -0000 On 2012-01-19 18:55, Andriy Gapon wrote: on 19/01/2012 19:36 Martin Ranne said the following: On 2012-01-19 17:32, Andriy Gapon wrote: on 19/01/2012 17:36 Martin Ranne said the following: >>>>I had a failure in one server where i try to determine if it is memory or cpu. It shows up as memory failure in memtest86. >>The result is that it managed to damage the zpool which is a raidz2 with 6 disks. >>>>If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f -R /mnt/zroot zroot it will reboot the computer. >>I have also tried to import it in another computer which is running 9-STABLE with the same result. On the second computer I >>used zpool -f -R /mnt/zroot "zpool-id" serv06zroot >>>>Can I get some help on how to be able to debug this and in the end be able to import it to repair it. >>>>Data for the second computer can be found attached. The disks in question are da0 to da5 in this. >>>And the panic message is? >>I am trying to get a crash dump but it hangs when dumping. >Alternatives: >- serial console >- digital camera >- eyes plus pen and paper Finally here it is. Is there anything i can do in the debugger to make it possible to find what is crashing in there?

Fatal trap 12: page fault while in kernel mode
Fatal trap 12: page fault while in kernel mode
cpuid = 0; cpuid = 2; apic id = 00
apic id = 02
fault virtual address = 0x88
fault virtual address = 0x38
fault code = supervisor read data, page not present
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff814a7ef5
instruction pointer = 0x20:0xffffffff814872a1
stack pointer = 0x28:0xffffff8c10252ad0
stack pointer = 0x28:0xffffff8c0d564f00
frame pointer = 0x28:0xffffff8c10252b40
frame pointer = 0x28:0xffffff8c0d564f30
code segment = base 0x0, limit 0xfffff, type 0x1b
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = processor eflags = interrupt enabled, interrupt enabled, resume, resume, IOPL = 0
IOPL = 0
current process = current process = 2659 (zpool)
0 [ thread pid 2659 tid 100592 ]
stopped at zio_vdev_child_io+0x25: cmpq $0,0x88(%r10)
db>

________________________________________ No virus found in this message. Checked by AVG - www.avg.com Version: 2012.0.1901 / Virus Database: 2109/4753 - Release Date: 01/19/12 From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 20:10:26 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 177B41065692 for ; Thu, 19 Jan 2012 20:10:26 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 5F9468FC16 for ; Thu, 19 Jan 2012 20:10:24 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id WAA22746; Thu, 19 Jan 2012 22:10:21 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1RnyJR-000CKw-Es; Thu, 19 Jan 2012 22:10:21 +0200 Message-ID: <4F1878AC.6060704@FreeBSD.org> Date: Thu, 19 Jan 2012 22:10:20 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:9.0) Gecko/20111222 Thunderbird/9.0 MIME-Version: 1.0 To: Martin Ranne References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> <4F18459F.7040309@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B252444@mailgate> <4F1858FE.7020509@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B25253F@mailgate> In-Reply-To: <39C592E81AEC0B418EAD826FC1BBB09B25253F@mailgate> X-Enigmail-Version: undefined Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: "freebsd-fs@freebsd.org" Subject: Re: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 20:10:26 -0000 on 19/01/2012 21:58 Martin Ranne said the following: > On 2012-01-19 18:55, Andriy Gapon wrote: > on 19/01/2012 19:36 Martin Ranne said the following: > On 2012-01-19 17:32, Andriy Gapon wrote: > on 19/01/2012 17:36 Martin Ranne said the following: >>>>> I had a failure in one server where i try to determine if it is memory or cpu. It shows up as memory failure in memtest86. >>The result is that it managed to damage the zpool which is a raidz2 with 6 disks.
> >>>>> If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f -R /mnt/zroot zroot it will reboot the computer. >>I have also tried to import it in another computer which is running 9-STABLE with the same result. On the second computer I >>used zpool -f -R /mnt/zroot "zpool-id" serv06zroot
> >>>>> Can I get some help on how to be able to debug this and in the end be able to import it to repair it.
> >>>>> Data for the second computer can be found attached. The disks in question are da0 to da5 in this.
> >>>> And the panic message is?
> >>> I am trying to get a crash dump but it hangs when dumping.
> >> Alternatives:
>> - serial console
>> - digital camera
>> - eyes plus pen and paper
> > Finally here it is. Is there anything I can do in the debugger to make it possible to find what is crashing in there?
>
> Fatal trap 12: page fault while in kernel mode
> Fatal trap 12: page fault while in kernel mode
> cpuid = 0; cpuid = 2; apic id = 00
> apic id = 02
> fault virtual address = 0x88
> fault virtual address = 0x38
> fault code = supervisor read data, page not present
> fault code = supervisor read data, page not present
> instruction pointer = 0x20:0xffffffff814a7ef5
> instruction pointer = 0x20:0xffffffff814872a1
> stack pointer = 0x28:0xffffff8c10252ad0
> stack pointer = 0x28:0xffffff8c0d564f00
> frame pointer = 0x28:0xffffff8c10252b40
> frame pointer = 0x28:0xffffff8c0d564f30
> code segment = base 0x0, limit 0xfffff, type 0x1b
> code segment = base 0x0, limit 0xfffff, type 0x1b
> = DPL 0, pres 1, long 1, def32 0, gran 1
> = DPL 0, pres 1, long 1, def32 0, gran 1
> processor eflags = processor eflags = interrupt enabled, interrupt enabled, resume, resume, IOPL = 0
> IOPL = 0
> current process = current process = 2659 (zpool)
> 0 [ thread pid 2659 tid 100592 ]

Hmm, two traps running almost perfectly in parallel...

> stopped at zio_vdev_child_io+0x25: cmpq $0,0x88(%r10)
> db>

At least the 'bt' command. It could be that the panic is caused by corrupted vdev label, but not sure...
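In case it helps, a typical sequence at that prompt would be something like the following (a sketch assuming DDB is compiled into the kernel; 'bt' is an alias for 'trace'):

db> bt                 (stack trace of the thread that trapped)
db> show registers     (CPU state at the trap)
db> ps                 (what else was running at the time)
db> call doadump       (try to force a crash dump anyway)
db> reset

The backtrace is the important part; the rest only matters if the dump keeps hanging.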
-- Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 20:17:19 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AB1FF106566B for ; Thu, 19 Jan 2012 20:17:19 +0000 (UTC) (envelope-from amdmi3@amdmi3.ru) Received: from smtp.timeweb.ru (smtp.timeweb.ru [92.53.116.57]) by mx1.freebsd.org (Postfix) with ESMTP id 60A128FC15 for ; Thu, 19 Jan 2012 20:17:19 +0000 (UTC) Received: from [213.148.20.85] (helo=hive.panopticon) by smtp.timeweb.ru with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.76) (envelope-from ) id 1RnyQ9-0006cb-Oy for freebsd-fs@FreeBSD.org; Fri, 20 Jan 2012 00:17:17 +0400 Received: from hades.panopticon (hades.panopticon [192.168.0.32]) by hive.panopticon (Postfix) with ESMTP id 9319FB84D for ; Fri, 20 Jan 2012 00:17:17 +0400 (MSK) Received: by hades.panopticon (Postfix, from userid 1000) id 8BFAAA56; Fri, 20 Jan 2012 00:17:17 +0400 (MSK) Date: Fri, 20 Jan 2012 00:17:17 +0400 From: Dmitry Marakasov To: freebsd-fs@FreeBSD.org Message-ID: <20120119201717.GC8142@hades.panopticon> References: <20120109185944.GA8140@hades.panopticon> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline In-Reply-To: <20120109185944.GA8140@hades.panopticon> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: Subject: Re: Issues with multiple-vdev ZFS root X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 20:17:19 -0000 * Dmitry Marakasov (amdmi3@hades.panopticon) wrote:

Just for the note: I've switched to multiple-vdev root pool configuration on a real machine and it works well.

        NAME          STATE     READ WRITE CKSUM
        hades         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            ada0p3    ONLINE       0     0     0
            ada1p3    ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            ada2p3    ONLINE       0     0     0
            ada3p3    ONLINE       0     0     0

root has copies=1 and I've tried to copy it over, so it's likely located on the second mirror; still, the system is bootable and I haven't seen any problems at all. The questions still remain, as this configuration is not really documented.

-- Dmitry Marakasov . 55B5 0596 FF1E 8D84 5F56 9510 D35A 80DD F9D2 F77D amdmi3@amdmi3.ru ..: jabber: amdmi3@jabber.ru http://www.amdmi3.ru

From owner-freebsd-fs@FreeBSD.ORG Thu Jan 19 21:02:06 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5A36D1065670 for ; Thu, 19 Jan 2012 21:02:06 +0000 (UTC) (envelope-from spork@bway.net) Received: from xena.bway.net (xena.bway.net [216.220.96.26]) by mx1.freebsd.org (Postfix) with ESMTP id EEFEE8FC08 for ; Thu, 19 Jan 2012 21:02:05 +0000 (UTC) Received: (qmail 10431 invoked by uid 0); 19 Jan 2012 21:02:05 -0000 Received: from smtp.bway.net (216.220.96.25) by xena.bway.net with (DHE-RSA-AES256-SHA encrypted) SMTP; 19 Jan 2012 21:02:05 -0000 Received: (qmail 10399 invoked by uid 90); 19 Jan 2012 21:02:03 -0000 Received: from unknown (HELO ?10.3.2.40?)
(spork@96.57.144.66) by smtp.bway.net with (AES128-SHA encrypted) SMTP; 19 Jan 2012 21:02:03 -0000 From: Charles Sprickman Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Date: Thu, 19 Jan 2012 16:02:03 -0500 Message-Id: <2364175B-EF98-4A0B-91C6-9D0437CBB8EA@bway.net> To: freebsd-fs@freebsd.org Mime-Version: 1.0 (Apple Message framework v1084) X-Mailer: Apple Mail (2.1084) Subject: ZIL on root pool? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 19 Jan 2012 21:02:06 -0000 Hello, Two quick questions about 8.2-STABLE (from early June) w/v28: Is this tidbit about adding a ZIL to a "zfs on root" pool accurate? http://astralblue.livejournal.com/371755.html The explanation makes sense, and I was able to add two Intel 320 drives in a mirror to my root pool. It worked, all seemed fine. So second question... After removing the ZIL from the pool (which generated no errors or complaints), upon rebooting I'm left with a pool that no longer seems bootable (yes, I did set bootfs back). I'm at the second(?) stage boot loader:

FreeBSD/x86 boot
Default: zroot:/boot/kernel/kernel
boot:

(that prompt does not have any options to list files, etc. like the next stage of the loader) I'm going to netboot in a bit and try importing the pool, but I also wanted to know if ZIL on a root pool is inherently a "bad" thing or not. Most of our 1U boxes are in a "everything in the root pool" config and I'm actually doing some benchmarking on this box to see if with SSD prices dropping we might want to add ZIL to some select servers. In a 1U, our options are limited, so if ZIL on a root pool is a no-go, that would be good to know now rather than later. Thanks, Charles From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 00:28:51 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A3C3F106564A for ; Fri, 20 Jan 2012 00:28:51 +0000 (UTC) (envelope-from john@kozubik.com) Received: from kozubik.com (kozubik.com [216.218.240.130]) by mx1.freebsd.org (Postfix) with ESMTP id 7698A8FC15 for ; Fri, 20 Jan 2012 00:28:51 +0000 (UTC) Received: from kozubik.com (localhost [127.0.0.1]) by kozubik.com (8.14.3/8.14.3) with ESMTP id q0K08ewe077317 for ; Thu, 19 Jan 2012 16:08:40 -0800 (PST) (envelope-from john@kozubik.com) Received: from localhost (john@localhost) by kozubik.com (8.14.3/8.14.3/Submit) with ESMTP id q0K08Zmf077314 for ; Thu, 19 Jan 2012 16:08:35 -0800 (PST) (envelope-from john@kozubik.com) Date: Thu, 19 Jan 2012 16:08:35 -0800 (PST) From: John Kozubik To: freebsd-fs@freebsd.org Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII Subject: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 00:28:51 -0000 We're about to invest heavily in a new ZFS infrastructure, and our plans are to: - wait for 8.3, with the updated 6gbps mps driver - Install and use LSI 9211-8i cards with newest "IT" firmware This appears to be the de facto standard for ZFS HBAs ... Is there any reason to consider other cards/vendors ?
Are these indeed considered solid (provided I use the new mps in 8.3) ? Thanks. From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 00:56:34 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E89AA106564A for ; Fri, 20 Jan 2012 00:56:34 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-vx0-f182.google.com (mail-vx0-f182.google.com [209.85.220.182]) by mx1.freebsd.org (Postfix) with ESMTP id A33A58FC16 for ; Fri, 20 Jan 2012 00:56:34 +0000 (UTC) Received: by vcbfl17 with SMTP id fl17so54571vcb.13 for ; Thu, 19 Jan 2012 16:56:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=4sJ6DwaeRLKGrnjuigB0nfvgbjKlVBNk7H1pxkzCxRk=; b=kuYijv6tPJssbdDx4pAqDp3Z8bQ72amnXWLSFS0kaSHyj/oV8WhzUTdYZl3gamU/IY GZ1wGXA63UMsvwccHMZJkDA7TGl/inTB29JVsDgzIdxCGswuxo2EcdaqKbW5fo+MQJF5 FlSVk4ClkrdEYVe+PFWwW3qSczKe+QU+i6G8U= MIME-Version: 1.0 Received: by 10.220.115.135 with SMTP id i7mr8072343vcq.40.1327020993915; Thu, 19 Jan 2012 16:56:33 -0800 (PST) Received: by 10.220.191.130 with HTTP; Thu, 19 Jan 2012 16:56:33 -0800 (PST) In-Reply-To: References: Date: Thu, 19 Jan 2012 16:56:33 -0800 Message-ID: From: Freddie Cash To: John Kozubik Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 00:56:35 -0000 On Thu, Jan 19, 2012 at 4:08 PM, John Kozubik wrote: > We're about to invest heavily in a new ZFS infrastructure, and our plans are > to: > > - wait for 8.3, with the updated 6gbps mps driver > - Install and use LSI 9211-8i cards with newest "IT" firmware > > This appears to be the de facto standard for ZFS HBAs ... > > Is there any reason to consider other cards/vendors ? We're using the SuperMicro AOC-USAS2-L8i controllers with great success, via the mps driver in 8.2-STABLE (originally, when we installed the box), now running 9.0-RELEASE. With the 9.0 IT firmware. Haven't tried the 10.0 firmware yet. We're currently only using ZFS for storing backups (compression, dedupe, and snapshots are perfect for this). But we have plans to use similar hardware for SAN/NAS boxes for VM servers.
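As a quick sanity check on a running box, the mps driver prints the firmware revision while probing, so something like this should show what you are actually running (newer drivers also expose a firmware version sysctl, but treat that as an assumption on 8.x):

# grep -i mps /var/run/dmesg.boot
# sysctl -a | grep -i mps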
-- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 04:54:10 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 778601065672 for ; Fri, 20 Jan 2012 04:54:10 +0000 (UTC) (envelope-from john@kozubik.com) Received: from kozubik.com (kozubik.com [216.218.240.130]) by mx1.freebsd.org (Postfix) with ESMTP id 61E738FC14 for ; Fri, 20 Jan 2012 04:54:10 +0000 (UTC) Received: from kozubik.com (localhost [127.0.0.1]) by kozubik.com (8.14.3/8.14.3) with ESMTP id q0K4s97O079883; Thu, 19 Jan 2012 20:54:09 -0800 (PST) (envelope-from john@kozubik.com) Received: from localhost (john@localhost) by kozubik.com (8.14.3/8.14.3/Submit) with ESMTP id q0K4s37Q079880; Thu, 19 Jan 2012 20:54:03 -0800 (PST) (envelope-from john@kozubik.com) Date: Thu, 19 Jan 2012 20:54:03 -0800 (PST) From: John Kozubik To: Freddie Cash In-Reply-To: Message-ID: References: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 04:54:10 -0000 Hi Freddie, On Thu, 19 Jan 2012, Freddie Cash wrote: > On Thu, Jan 19, 2012 at 4:08 PM, John Kozubik wrote: >> We're about to invest heavily in a new ZFS infrastructure, and our plans are >> to: >> >> - wait for 8.3, with the updated 6gbps mps driver >> - Install and use LSI 9211-8i cards with newest "IT" firmware >> >> This appears to be the de facto standard for ZFS HBAs ... >> >> Is there any reason to consider other cards/vendors ? > > We're using the SuperMicro AOC-USAS2-L8i controllers with great > success, via the mps driver in 8.2-STABLE (originally, when we > installed the box), now running 9.0-RELEASE. WIth the 9.0 IT > firmware. Haven't tried the 10.0 firmware yet. > > We're currently only using ZFS for storing backups (compression, > dedupe, and snapshots are perfect for this). But we have plans to use > similar hardware as these boxes as SAN/NAS boxes for VM servers. Is the AOC-USAS2-L8i *actually* identical to the 9211-8i, or is it a different implementation of the same chipset ? 
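For what it's worth, one way to settle that empirically would be to compare the PCI IDs the two boards attach with, e.g.:

# pciconf -lv | grep -A 3 mps

If the chip= field matches on both (an LSI SAS2008 should show up as chip=0x00721000), the boards differ only in layout, connectors and firmware options, not in silicon.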
From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 04:57:38 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8EC13106566B for ; Fri, 20 Jan 2012 04:57:38 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-vx0-f182.google.com (mail-vx0-f182.google.com [209.85.220.182]) by mx1.freebsd.org (Postfix) with ESMTP id 43EAB8FC13 for ; Fri, 20 Jan 2012 04:57:38 +0000 (UTC) Received: by vcbfl17 with SMTP id fl17so221386vcb.13 for ; Thu, 19 Jan 2012 20:57:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=za+tXIIi4rJdVRCL7z/x6cLkFmJUycXZ2zw5ozZk2H4=; b=Yoddo53kt7DeG2Sicap21eBVK4rraV9s8vzj5LchOLctkm41qYAnU7W2b8yK0id91S xAXddMQdG7CwMzNTggwqaanWgrNnpV4dLeW1SGRHJmRM8m6rm3RaXF4DOYH9h6+RwGpz AGWLSISHAQ6A7LfEVyfhN4vCLROgp6DEn0cQ0= MIME-Version: 1.0 Received: by 10.220.156.195 with SMTP id y3mr4802539vcw.50.1327035457610; Thu, 19 Jan 2012 20:57:37 -0800 (PST) Received: by 10.220.191.130 with HTTP; Thu, 19 Jan 2012 20:57:37 -0800 (PST) Received: by 10.220.191.130 with HTTP; Thu, 19 Jan 2012 20:57:37 -0800 (PST) In-Reply-To: References: Date: Thu, 19 Jan 2012 20:57:37 -0800 Message-ID: From: Freddie Cash To: John Kozubik Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 04:57:38 -0000 On Jan 19, 2012 8:54 PM, "John Kozubik" wrote: > > > Hi Freddie, > > > On Thu, 19 Jan 2012, Freddie Cash wrote: > >> On Thu, Jan 19, 2012 at 4:08 PM, John Kozubik wrote: >>> >>> We're about to invest heavily in a new ZFS infrastructure, and our plans are >>> to: >>> >>> - wait for 8.3, with the updated 6gbps mps driver >>> - Install and use LSI 9211-8i cards with newest "IT" firmware >>> >>> This appears to be the de facto standard for ZFS HBAs ... >>> >>> Is there any reason to consider other cards/vendors ? >> >> >> We're using the SuperMicro AOC-USAS2-L8i controllers with great >> success, via the mps driver in 8.2-STABLE (originally, when we >> installed the box), now running 9.0-RELEASE. WIth the 9.0 IT >> firmware. Haven't tried the 10.0 firmware yet. >> >> We're currently only using ZFS for storing backups (compression, >> dedupe, and snapshots are perfect for this). But we have plans to use >> similar hardware as these boxes as SAN/NAS boxes for VM servers. > > > > Is the AOC-USAS2-L8i *actually* identical to the 9211-8i, or is it a different implementation of the same chipset ? They're both using the LSI2008 chipset. Isn't the 9200 a RAID controller that can be flashed with the IT firmware to turn it into a 'dumb' SATA controller? The USAS2 is just a SATA controller. No onboard RAM, no BBU, no frills. 
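For the archives, the crossflash/update sequence people usually do with LSI's sas2flash utility looks roughly like this -- a sketch from memory, so treat the flags and image names (they vary per firmware release) as assumptions and read LSI's README first:

# sas2flash -listall                              (note the controller number)
# sas2flash -o -e 6 -c 0                          (erase flash on controller 0)
# sas2flash -o -f 2118it.bin -b mptsas2.rom -c 0  (write IT firmware plus boot ROM)
# sas2flash -listall                              (confirm the new version)

Don't reboot or power off between the erase and the write, or the card ends up a brick.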
From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 07:03:40 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3C5C41065672 for ; Fri, 20 Jan 2012 07:03:40 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-vw0-f54.google.com (mail-vw0-f54.google.com [209.85.212.54]) by mx1.freebsd.org (Postfix) with ESMTP id ECFCD8FC1A for ; Fri, 20 Jan 2012 07:03:39 +0000 (UTC) Received: by vbbey12 with SMTP id ey12so299032vbb.13 for ; Thu, 19 Jan 2012 23:03:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=8/QFH6na4WEz9PMB8Z0HONSxeStiGapXz4NZEzighOc=; b=Uox8s0OXyjq5fYdnTbVzoc4jBU971tR8N4XqjDe5KA8U/RfrbrB2Qf7Coj7L31SasA ivL3gGHAugILFNQJ8R/jHuC8o97qAyS5rlHednmJXrlnKF7OpfJsy0l9xit7jnmdJO96 Oc/GM5xHCgz3HevpekTqF7yND79T7awQWNGTY= MIME-Version: 1.0 Received: by 10.52.27.70 with SMTP id r6mr14104570vdg.41.1327041410776; Thu, 19 Jan 2012 22:36:50 -0800 (PST) Sender: rincebrain@gmail.com Received: by 10.220.179.195 with HTTP; Thu, 19 Jan 2012 22:36:50 -0800 (PST) In-Reply-To: References: Date: Fri, 20 Jan 2012 01:36:50 -0500 X-Google-Sender-Auth: lDhS3Ih1LHcgx9SbIXuel-qowCo Message-ID: From: Rich To: Freddie Cash Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 07:03:40 -0000 The 92xx can be flashed with the IT firmware to just be a dumb SAS controller, which includes just being a dumb SATA over SAS controller. :) I should just point out that the firmware for the 9211/9200 is up to 12.0.0.0 at the moment. - Rich On Thu, Jan 19, 2012 at 11:57 PM, Freddie Cash wrote: > On Jan 19, 2012 8:54 PM, "John Kozubik" wrote: >> >> >> Hi Freddie, >> >> >> On Thu, 19 Jan 2012, Freddie Cash wrote: >> >>> On Thu, Jan 19, 2012 at 4:08 PM, John Kozubik wrote: >>>> >>>> We're about to invest heavily in a new ZFS infrastructure, and our > plans are >>>> to: >>>> >>>> - wait for 8.3, with the updated 6gbps mps driver >>>> - Install and use LSI 9211-8i cards with newest "IT" firmware >>>> >>>> This appears to be the de facto standard for ZFS HBAs ... >>>> >>>> Is there any reason to consider other cards/vendors ? >>> >>> >>> We're using the SuperMicro AOC-USAS2-L8i controllers with great >>> success, via the mps driver in 8.2-STABLE (originally, when we >>> installed the box), now running 9.0-RELEASE. With the 9.0 IT >>> firmware. Haven't tried the 10.0 firmware yet. >>> >>> We're currently only using ZFS for storing backups (compression, >>> dedupe, and snapshots are perfect for this). But we have plans to use >>> similar hardware as these boxes as SAN/NAS boxes for VM servers. >> >> >> >> Is the AOC-USAS2-L8i *actually* identical to the 9211-8i, or is it a > different implementation of the same chipset ? > > They're both using the LSI2008 chipset. > > Isn't the 9200 a RAID controller that can be flashed with the IT firmware > to turn it into a 'dumb' SATA controller? The USAS2 is just a SATA > controller. No onboard RAM, no BBU, no frills.
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 08:50:36 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7C095106566C for ; Fri, 20 Jan 2012 08:50:36 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.187]) by mx1.freebsd.org (Postfix) with ESMTP id 294E08FC15 for ; Fri, 20 Jan 2012 08:50:35 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mrbap3) with ESMTP (Nemesis) id 0Lmufk-1SI0kI46yS-00h5uU; Fri, 20 Jan 2012 09:50:35 +0100 Message-ID: <4F192ADA.5020903@brockmann-consult.de> Date: Fri, 20 Jan 2012 09:50:34 +0100 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.23) Gecko/20110922 Thunderbird/3.1.15 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: In-Reply-To: X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Provags-ID: V02:K0:fizP42kM/1YbZmCAU8rF0Ee2XB2d/YMKZp5K9HJS7tq FwSfNlQ5l+cC5jWgE2zyLljNYT3D0EmnsKyyRSMtoJl0oVguav l1O3zQz17h3RDHNhhr49j1XTpH1LxbdbdpCMgKMtQSUidichXb DhSKfKjvD0ZFY1eRiAAs56LrZLgYeq2isK2f5f3yvj4XfocJ3m Hc6I7ScIVztdK30ZjVQ8AqPlGwBNeN4zvpRyNpD++f+dTUhcJo A4dF4+9UqQL/tJrZ9WzF4NImLMjRUjpeVToJeUbIbCxR/rGCJW p/dsEyF038wS9/T6STxSp/Y4NYfgS+wy///S6JAL2HBOtcxHvS 1a6z3PNRNB4Qpa0Ys6mMUNryCvyYh1vIutBa+5DPK Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 08:50:36 -0000 John,

Various people have problems with mps and ZFS.

I am using 8-STABLE from October 2011, and on the 9211-8i HBA I am using the version 9 IT firmware. In my case, it was the firmware on an SSD that caused the problems:

Crucial M4-CT256M4SSD2 firmware 0001

Randomly it would fail. Trying to reproduce that with heavy IO didn't work, but I found that hot pulling reproduces it: hot pulling the disk a few times while mounted causes the disk to never respond again until rebooting (causing SCSI timeouts). When running "gpart recover da##" or "camcontrol reset ..." on the disk after it is removed, the kernel panics.

The mpslsi driver does not solve the problem with the CT256M4SSD2 and firmware 0001, but firmware 0009 seems to work. Trying the 'lost disk' on another machine works, but FreeBSD needs to be rebooted, maybe for some part of the hardware to reset and forget about the disk.

Sebulon saw a similar problem with Samsung Spinpoint disks; see this thread: http://forums.freebsd.org/showthread.php?t=27128

And Beeblebrox, with different Samsung Spinpoint disks: http://forums.freebsd.org/showthread.php?p=162201#post162201

And Jason Wolfe, with Seagate ST91000640SS disks (with mps): http://osdir.com/ml/freebsd-scsi/2011-11/msg00006.html (freebsd-fs list, with original post at 11/01/2011 07:13 PM CET) But with mpslsi, he says the problems go away.
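(For anyone who wants to try the vendor driver: after installing LSI's mpslsi package, the switch is normally done from /boot/loader.conf -- the module name here is assumed from the driver's install notes, and the kernel must not also have the in-tree mps driver compiled in, or both will try to attach:

mpslsi_load="YES"

After a reboot the HBA and its disks attach via mpslsi instead of mps.)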
I tried reproducing his problem on my system (on my M4-CT256M4SSD2 0001 and my HDS5C3030ALA630), and was able to get a timeout similar to his with mpslsi (one time out of many tries), and it recovered gracefully, as he says his does. So based on that, I would say mpslsi is the safest choice. Perhaps the same problem on mps will cause a crash on any system with any disk, not just ST91000640SS disks.

I am using the following disks with no known problems:
Hitachi HUA723030ALA640 firmware MKAOA580 (tested with mps and mpslsi, didn't test hot pull)
Seagate ST33000650NS firmware 0002 (tested with mps and mpslsi, didn't test hot pull)
Hitachi HDS5C3030ALA630 firmware MEAOA580 (tested mostly with mpslsi, and tested hot pull)
Crucial M4-CT256M4SSD2 firmware 0009 (tested only with mpslsi; not fully tested yet, but passes the hot pull test; has a URE which it didn't have with firmware 0001)

The "hot pull test":
--------------
dd if=/dev/random of=/somewhere/on/the/disk bs=128k
pull disk
wait 1 second
put disk back in
wait 1 second
pull disk
wait 1 second
put disk back in
wait 1 second
hit ctrl+c on the dd command
wait for messages to stop on tty1 / syslog
gpart show
zpool status
zpool online
zpool status

If gpart show does not seg fault, and zpool online causes the disk to resilver, then it is all good. (40% of the time, the bad SSD passes the test if only pulled once, and so far 0% if pulled twice, and one time out of all tests, the red lights blink on all disks on the controller when the bad disk is pulled.)
--------------

So, I would say that with the right combination of hardware, you have a fine system. So just test your disks however you think works best. If you want to use mps, use the "smartctl -a" loop test to make sure the controller recovers from the timeouts it provokes. If during the test you get no timeouts, I would call the test indeterminate. A pass looks like what Jason Wolfe posted in the mailing list (linked above): "SMID ... finished recovery after aborting TaskMID ...".

Peter

On 01/20/2012 01:08 AM, John Kozubik wrote: > > We're about to invest heavily in a new ZFS infrastructure, and our > plans are to: > > > - wait for 8.3, with the updated 6gbps mps driver > > - Install and use LSI 9211-8i cards with newest "IT" firmware > > > This appears to be the de facto standard for ZFS HBAs ... > > Is there any reason to consider other cards/vendors ? > > Are these indeed considered solid (provided I use the new mps in 8.3) ? > > Thanks. > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" -- -------------------------------------------- Peter Maloney Brockmann Consult Max-Planck-Str.
2 21502 Geesthacht Germany Tel: +49 4152 889 300 Fax: +49 4152 889 333 E-mail: peter.maloney@brockmann-consult.de Internet: http://www.brockmann-consult.de -------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 08:55:21 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BC4DF106566C for ; Fri, 20 Jan 2012 08:55:21 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.17.10]) by mx1.freebsd.org (Postfix) with ESMTP id 4FFD08FC0C for ; Fri, 20 Jan 2012 08:55:21 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mreu3) with ESMTP (Nemesis) id 0LbTTT-1SUKLz176Z-00lDcS; Fri, 20 Jan 2012 09:55:20 +0100 Message-ID: <4F192BF7.2020500@brockmann-consult.de> Date: Fri, 20 Jan 2012 09:55:19 +0100 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.23) Gecko/20110922 Thunderbird/3.1.15 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <20120109185944.GA8140@hades.panopticon> <20120119201717.GC8142@hades.panopticon> In-Reply-To: <20120119201717.GC8142@hades.panopticon> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Provags-ID: V02:K0:YiiwOlUf+A4+DA3jpukLMCzEf1nWaWD+Fb7d3IHmKnN MqZT7O0s/wr9SrM72+vWLix87X5LNe0LXgWoQ+fKSsy+WibW37 vNdHlH/A0qP2/4LeomIyiUKUKEFKVG0X4dW7fBOignOUaFcaYV qmySVabhq+PHfTgZ6vjxmE9HloxPXowqZNOAeh+mJJNnMhynFO JtQ+vrV1KV1Kih+MyFaBC+PPpKjCGN5C5FsaxvTpsZB4DD4z5j XgyVgLEcBK2/+BJcqBU5auQbfNjj89RrMQuS+gp7/2DnyM0nxl yq1cKI1wPHB6Q7tJOCQZEgpmGkcIC65A49SRWjpaDVgYhPdpEZ ouXNe87GKthvEWH/EHQhnuKIvn0nnX5EgdF76KvV2 Subject: Re: Issues with multiple-vdev ZFS root X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 08:55:21 -0000 On 01/19/2012 09:17 PM, Dmitry Marakasov wrote:
> * Dmitry Marakasov (amdmi3@hades.panopticon) wrote:
>
> Just for the note: I've switched to multiple-vdev root pool configuration on a real machine and it works well.
>
>         NAME          STATE     READ WRITE CKSUM
>         hades         ONLINE       0     0     0
>           mirror-0    ONLINE       0     0     0
>             ada0p3    ONLINE       0     0     0
>             ada1p3    ONLINE       0     0     0
>           mirror-1    ONLINE       0     0     0
>             ada2p3    ONLINE       0     0     0
>             ada3p3    ONLINE       0     0     0
>
> root has copies=1 and I've tried to copy it over, so it's likely located on the second mirror, still the system is bootable and I haven't seen any problems at all.
>
> The questions still remain, as this configuration is not really documented.

To test your VirtualBox one-disk theory, you could create a sliced-up root pool on a single disk:

# gpart show
=>   34  ...  da1  GPT  (10G)
     34  2014       - free -  (1M)
   2048   128    1  freebsd-boot  (64k)
   2176  1920       - free -  (960k)
   4096   ...    2  freebsd-zfs  (2G)
    ...   ...    3  freebsd-zfs  (2G)
    ...   ...    4  freebsd-zfs  (2G)
    ...   ...    5  freebsd-zfs  (2G)

# zpool status zroot
...
        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/root1  ONLINE       0     0     0
            gpt/root2  ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            gpt/root3  ONLINE       0     0     0
            gpt/root4  ONLINE       0     0     0

-- -------------------------------------------- Peter Maloney Brockmann Consult Max-Planck-Str.
2 21502 Geesthacht Germany Tel: +49 4152 889 300 Fax: +49 4152 889 333 E-mail: peter.maloney@brockmann-consult.de Internet: http://www.brockmann-consult.de -------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 09:09:17 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 69A69106564A for ; Fri, 20 Jan 2012 09:09:17 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta06.emeryville.ca.mail.comcast.net (qmta06.emeryville.ca.mail.comcast.net [76.96.30.56]) by mx1.freebsd.org (Postfix) with ESMTP id 501018FC0A for ; Fri, 20 Jan 2012 09:09:17 +0000 (UTC) Received: from omta11.emeryville.ca.mail.comcast.net ([76.96.30.36]) by qmta06.emeryville.ca.mail.comcast.net with comcast id Pl9G1i0030mlR8UA6l9Ga6; Fri, 20 Jan 2012 09:09:16 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta11.emeryville.ca.mail.comcast.net with comcast id Pl9G1i0031t3BNj8Xl9GMg; Fri, 20 Jan 2012 09:09:16 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id DF7E4102C19; Fri, 20 Jan 2012 01:09:15 -0800 (PST) Date: Fri, 20 Jan 2012 01:09:15 -0800 From: Jeremy Chadwick To: Peter Maloney Message-ID: <20120120090915.GA90876@icarus.home.lan> References: <4F192ADA.5020903@brockmann-consult.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4F192ADA.5020903@brockmann-consult.de> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 09:09:17 -0000 On Fri, Jan 20, 2012 at 09:50:34AM +0100, Peter Maloney wrote: > John, > > Various people have problems with mps and ZFS. > > I am using 8-STABLE from October 2011, and on the 9211-8i HBA, I am > using 9 IT firmware. In my case, it was the firmware on an SSD that > caused problems. > Crucial M4-CT256M4SSD2 firmware 0001 My below comment is unrelated to mps driver bugs and so on, but: Owners of Crucial m4 SSDs need to be aware of this catastrophic problem with the drives: Crucial m4 SSDs begin to fail (data loss) starting at the 5200 power-on-hours count. Apparently the problem is triggered by some sort of SMART-related ordeal as well (I have no details). This is a firmware bug and has been confirmed by Crucial. A firmware fix just came out a few days ago for the problem. Here's the URL where I was made aware of this problem (note the trailing hyphen please): http://www.dslreports.com/forum/r26745697- And media confirmations: http://www.theverge.com/2012/1/17/2713178/crucial-m4-ssd-firmware-update-fixes-recurring-bsod http://www.anandtech.com/show/5308/crucial-to-fix-m4-bsod-issue-in-two-weeks http://www.anandtech.com/show/5424/crucial-provides-a-firmware-update-for-m4-to-fix-the-bsod-issue I've now added Crucial to my SSD brands to avoid (currently OCZ and Crucial). Welcome to year 20xx, where nobody actually does quality assurance properly. -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 09:09:42 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 88A48106566B; Fri, 20 Jan 2012 09:09:42 +0000 (UTC) (envelope-from martin.ranne@kockumsonics.com) Received: from webmail.kockumsonics.com (mail.kockumsonics.com [194.103.55.3]) by mx1.freebsd.org (Postfix) with ESMTP id DCD298FC16; Fri, 20 Jan 2012 09:09:41 +0000 (UTC) Received: from MAILGATE.sonet.local ([192.168.12.8]) by mailgate ([192.168.12.8]) with mapi id 14.01.0355.002; Fri, 20 Jan 2012 10:09:39 +0100 From: Martin Ranne To: Andriy Gapon Thread-Topic: zpool import reboots computer Thread-Index: AczWvHf/qf1tgj/cQ3aTdT164KORYwAAxbSAAARQzcD///SRAP//zVoQgABYagD//xWRYA== Date: Fri, 20 Jan 2012 09:09:38 +0000 Message-ID: <39C592E81AEC0B418EAD826FC1BBB09B25284B@mailgate> References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> <4F18459F.7040309@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B252444@mailgate> <4F1858FE.7020509@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B25253F@mailgate> <4F1878AC.6060704@FreeBSD.org> In-Reply-To: <4F1878AC.6060704@FreeBSD.org> Accept-Language: sv-SE, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [192.168.15.18] Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Cc: "freebsd-fs@freebsd.org" Subject: RE: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 09:09:42 -0000 On 2012-01-19 21:10, Andriy Gapon wrote: >on 19/01/2012 21:58 Martin Ranne said the following: >>On 2012-01-19 18:55, Andriy Gapon wrote: >>on 19/01/2012 19:36 Martin Ranne said the following: >>>On 2012-01-19 17:32, Andriy Gapon wrote: >>>on 19/01/2012 17:36 Martin Ranne said the following: >>>>>>I had a failure in one server where I try to determine if it is memory or cpu. It shows up as memory failure in memtest86. >>The result is that it managed to damage the zpool which is a raidz2 with 6 disks. >>>>>>If I boot from a FreeBSD 9.0-RELEASE usb stick and import it with zpool -f -R /mnt/zroot zroot it will reboot the computer. >>I have also tried to import it in another computer which is running 9-STABLE with the same result. On the second computer I >>used zpool -f -R /mnt/zroot "zpool-id" serv06zroot >>>>>>Can I get some help on how to be able to debug this and in the end be able to import it to repair it. >>>>>>Data for the second computer can be found attached. The disks in question are da0 to da5 in this. >>>>>And the panic message is? >>>>I am trying to get a crash dump but it hangs when dumping. >>>Alternatives: >>>- serial console >>>- digital camera >>>- eyes plus pen and paper >>Finally here it is. Is there anything I can do in the debugger to make it possible to find what is crashing in there?
>>Fatal trap 12: page fault while in kernel mode
>>Fatal trap 12: page fault while in kernel mode
>>cpuid = 0; cpuid = 2; apic id = 00
>>apic id = 02
>>fault virtual address = 0x88
>>fault virtual address = 0x38
>>fault code = supervisor read data, page not present
>>fault code = supervisor read data, page not present
>>instruction pointer = 0x20:0xffffffff814a7ef5
>>instruction pointer = 0x20:0xffffffff814872a1
>>stack pointer = 0x28:0xffffff8c10252ad0
>>stack pointer = 0x28:0xffffff8c0d564f00
>>frame pointer = 0x28:0xffffff8c10252b40
>>frame pointer = 0x28:0xffffff8c0d564f30
>>code segment = base 0x0, limit 0xfffff, type 0x1b
>>code segment = base 0x0, limit 0xfffff, type 0x1b
>> = DPL 0, pres 1, long 1, def32 0, gran 1
>> = DPL 0, pres 1, long 1, def32 0, gran 1
>>processor eflags = processor eflags = interrupt enabled, interrupt enabled, resume, resume, IOPL = 0
>>IOPL = 0
>>current process = current process = 2659 (zpool)
>>0 [ thread pid 2659 tid 100592 ]
>Hmm, two traps running almost perfectly in parallel...
>stopped at zio_vdev_child_io+0x25: cmpq $0,0x88(%r10)
>db>
>At least the 'bt' command.
>It could be that the panic is caused by corrupted vdev label, but not sure...

I tried again to get into the debugger. It will not always work, as it freezes before I get to the prompt most of the time, but here it is. Any other commands to run in the debugger to get better information to help solve this?

I used the command
zpool import -F -f -o readonly=on -R /mnt/serv06 zroot
The result is the following:

Fatal trap 12: page fault while in kernel mode
Fatal trap 12: page fault while in kernel mode
cpuid = 0; cpuid = 5; apic id = 00
apic id = 05
fault virtual address = 0x38
fault virtual address = 0x88
fault code = supervisor read data, page not present
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff814872a1
instruction pointer = 0x20:0xffffffff814a7ef5
stack pointer = 0x28:0xffffff8c0d564f00
stack pointer = 0x28:0xffffff8c0ffd7ad0
frame pointer = 0x28:0xffffff8c0d564f30
frame pointer = 0x28:0xffffff8c0ffd7b40
code segment = base 0x0, limit 0xfffff, type 0x1b
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = processor eflags = interrupt enabled, interrupt enabled, resume, resume, IOPL = 0
IOPL = 0
current process = current process = 0 (system_task1_3)
26[ thread pid 0 tid 100099 ]
Stopped at vdev_is_dead+0x1: cmpq $0x5,0x28(%rdi)
db> bt
Tracing pid 0 tid 100099 td 0xfffffe000e546460
vdev_is_dead() at vdev_is_dead+0x1
vdev_mirror_child_select() at vdev_mirror_child_select+0x67
vdev_mirror_io_start() at vdev_mirror_io_start+0x24c
zio_vdev_io_start() at zio_vdev_io_start+0x232
zio_execute() at zio_execute+0xc3
zio_gang_assemble() at zio_gang_assemble+0x1b
zio_execute() at zio_execute+0xc3
arc_read_nolock() at arc_read_nolock+0x6d1
arc_read() at arc_read+0x93
traverse_prefetcher() at traverse_prefetcher+0x103
traverse_visitbp() at traverse_visitbp+0x21c
traverse_dnode() at traverse_dnode+0x7c
traverse_visitbp() at traverse_visitbp+0x3ff
traverse_visitbp() at traverse_visitbp+0x316
traverse_visitbp() at traverse_visitbp+0x316
traverse_visitbp() at traverse_visitbp+0x316
traverse_visitbp() at traverse_visitbp+0x316
traverse_visitbp() at traverse_visitbp+0x316
traverse_visitbp() at traverse_visitbp+0x316
traverse_dnode() at traverse_dnode+0x7c
traverse_visitbp() at traverse_visitbp+0x48c
traverse_prefetch_thread() at traverse_prefetch_thread+0x78
taskq_run() at taskq_run+0x13
taskqueue_run_locked() at taskqueue_run_locked+0x85
taskqueue_thread_loop() at taskqueue_thread_loop+0x46
fork_exit() at fork_exit+0x11f
fork_trampoline() at fork_trampoline+0xe
--- trap 0, rip = 0, rsp = 0xffffff8c0d565d00, rbp = 0 ---
db>

//Martin Ranne

From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 09:30:21 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1127D106566C for ; Fri, 20 Jan 2012 09:30:21 +0000 (UTC) (envelope-from michael@fuckner.net) Received: from dedihh.fuckner.net (dedihh.fuckner.net [81.209.183.161]) by mx1.freebsd.org (Postfix) with ESMTP id BEF528FC1A for ; Fri, 20 Jan 2012 09:30:20 +0000 (UTC) Received: from dedihh.fuckner.net (localhost [127.0.0.1]) by dedihh.fuckner.net (Postfix) with ESMTP id 3625A118AB for ; Fri, 20 Jan 2012 10:13:27 +0100 (CET) X-Virus-Scanned: amavisd-new at fuckner.net Received: from dedihh.fuckner.net ([127.0.0.1]) by dedihh.fuckner.net (dedihh.fuckner.net [127.0.0.1]) (amavisd-new, port 10024) with SMTP id Rr4iRqvPOMua for ; Fri, 20 Jan 2012 10:13:18 +0100 (CET) Received: from fuckner.delnet (unknown [85.183.0.195]) by dedihh.fuckner.net (Postfix) with ESMTPA id D4905118A2 for ; Fri, 20 Jan 2012 10:13:17 +0100 (CET) Message-ID: <4F192FDD.6090409@fuckner.net> Date: Fri, 20 Jan 2012 10:11:57 +0100 From: Michael Fuckner User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:9.0) Gecko/20111222 Thunderbird/9.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <4F192ADA.5020903@brockmann-consult.de> In-Reply-To: <4F192ADA.5020903@brockmann-consult.de> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 09:30:21 -0000 On 01/20/2012 09:50 AM, Peter Maloney wrote:

Hi all,

> The "hot pull test":
> --------------
> dd if=/dev/random of=/somewhere/on/the/disk bs=128k
> pull disk
> wait 1 second
> put disk back in
> wait 1 second

This is way too fast - if a device is added or removed, there is a complete rediscovery on the SAS bus, which takes at least 15 seconds. In production environments such a short span wouldn't happen, so this test may produce segfaults, but it is not realistic.

Regards, Michael!
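For repeatability, the pull test can be scripted with a realistic settle time; a sketch in sh, where the disk, pool and target file are placeholders and the physical pulls are still done by hand:

#!/bin/sh
DISK=da10    # placeholder: the disk under test
POOL=tank    # placeholder: the pool it belongs to
# keep write traffic going to the pool in the background
dd if=/dev/random of=/${POOL}/pulltest bs=128k &
DDPID=$!
for i in 1 2; do
    echo "pull ${DISK} now";     sleep 20   # leave time for SAS rediscovery
    echo "reinsert ${DISK} now"; sleep 20
done
kill ${DDPID}
gpart show ${DISK}              # should not seg fault
zpool online ${POOL} ${DISK}    # should trigger a resilver
zpool status ${POOL}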
From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 09:36:35 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0CBCD1065675 for ; Fri, 20 Jan 2012 09:36:35 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.171]) by mx1.freebsd.org (Postfix) with ESMTP id 9343E8FC12 for ; Fri, 20 Jan 2012 09:36:34 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mreu1) with ESMTP (Nemesis) id 0LpiwE-1SKjxB0v70-00eqSp; Fri, 20 Jan 2012 10:36:31 +0100 Message-ID: <4F19359E.6030901@brockmann-consult.de> Date: Fri, 20 Jan 2012 10:36:30 +0100 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.23) Gecko/20110922 Thunderbird/3.1.15 MIME-Version: 1.0 To: Jeremy Chadwick References: <4F192ADA.5020903@brockmann-consult.de> <20120120090915.GA90876@icarus.home.lan> In-Reply-To: <20120120090915.GA90876@icarus.home.lan> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Provags-ID: V02:K0:J6OkcmyKltCTQJ8d2erwFmkpv4hg7f5m1QJ4/cPZudI UiKSC2Ctiq6Fm/uzfgCwsBw5fBG9V3zHYHxzci2eV3m+r9PF1+ tsApwFH12qsweWHzo80nIBRTRUsDYjRkUpuM0V/QbA2ITM7fNF lLhQjeuslUvv1Y9Q0JDEyiePJ78fTqn62vurNCe+X3On5/hTez oiUKdBCxI90rhTYRs3XWuQ/6V02+Hyma+MLPAbawC2u3QrXBb0 3qnlf1eGYbo6KdPtJRc0HNkhP1M87ogZ8+TGKCMZ1s2HhNzUk+ 0PISiSJ5MLR/rngHOu/kniBzLROU6i7Wt4ftf8PZrTUZ/cuNl6 jtnFf4gTn6fy+EC8RLm70EEDKld+RkCUAds4KY7nY Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 09:36:35 -0000 On 01/20/2012 10:09 AM, Jeremy Chadwick wrote: > On Fri, Jan 20, 2012 at 09:50:34AM +0100, Peter Maloney wrote: >> John, >> >> Various people have problems with mps and ZFS. >> >> I am using 8-STABLE from October 2011, and on the 9211-8i HBA, I am >> using 9 IT firmware. In my case, it was the firmware on an SSD that >> caused problems. >> Crucial M4-CT256M4SSD2 firmware 0001 > My below comment is unrelated to mps driver bugs and so on, but: > > Owners of Crucial m4 SSDs need to be aware of this catastrophic problem > with the drives: Crucial m4 SSDs begin to fail (data loss) starting at > the 5200 power-on-hours count. Apparently the problem is triggered by > some sort of SMART-related ordeal as well (I have no details). This is > a firmware bug and has been confirmed by Crucial. > > A firmware fix just came out a few days ago for the problem. Here's the > URL where I was made aware of this problem (note the trailing hyphen > please): > > http://www.dslreports.com/forum/r26745697- > Thanks for the notice, but I am aware of this, and that the newly fixed firmware is not compatible with SAS expanders (which implies that there are hacks involved)... so I still need to wait. :( > And media confirmations: > > http://www.theverge.com/2012/1/17/2713178/crucial-m4-ssd-firmware-update-fixes-recurring-bsod > http://www.anandtech.com/show/5308/crucial-to-fix-m4-bsod-issue-in-two-weeks > http://www.anandtech.com/show/5424/crucial-provides-a-firmware-update-for-m4-to-fix-the-bsod-issue > > I've now added Crucial to my SSD brands to avoid (currently OCZ and > Crucial).
Welcome to year 20xx, where nobody actually does quality > assurance properly. What is your problem with OCZ SSDs? -- -------------------------------------------- Peter Maloney Brockmann Consult Max-Planck-Str. 2 21502 Geesthacht Germany Tel: +49 4152 889 300 Fax: +49 4152 889 333 E-mail: peter.maloney@brockmann-consult.de Internet: http://www.brockmann-consult.de -------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 09:56:59 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4E931106566B for ; Fri, 20 Jan 2012 09:56:59 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.171]) by mx1.freebsd.org (Postfix) with ESMTP id D5BBC8FC0C for ; Fri, 20 Jan 2012 09:56:58 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mrbap4) with ESMTP (Nemesis) id 0MEFDM-1RqJs12gVU-00FPZ1; Fri, 20 Jan 2012 10:56:57 +0100 Message-ID: <4F193A68.6050805@brockmann-consult.de> Date: Fri, 20 Jan 2012 10:56:56 +0100 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.23) Gecko/20110922 Thunderbird/3.1.15 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <4F192ADA.5020903@brockmann-consult.de> <4F192FDD.6090409@fuckner.net> In-Reply-To: <4F192FDD.6090409@fuckner.net> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Provags-ID: V02:K0:6zSdEYWPKyoSG5AcwET2wSO6srv0Zxuyk3LXNykhJ2w xJ0QmDwLCMKI97s8daBejwIvBKSS2HxqUSeCG++/X2tnsFpuN9 SBERcKzXJCGtZ2/EonGEA4pip0zAQqMyjo1UI8b/lCsGtRyuxD 6bKk++Y+cJM6KIEjUTAmq/TbCBNaeRKG8mz+oJybswlUWXARxF b2kzWUquRGD8cuEIh6BMRgtr9mwLAiUTtJe7pNYhIcaMkv3W8J q66tqVWk46qD10uHmoNyqh+Yra/2/mAfHmsb9duYDLJyXqoIdJ XU1emC/XN8YuQOnavqCSXxRQ7yPnt/tho9+bLEbZ6iVcIgx0oY S+OGOBkN2KL8dBzxjdaHuZjJ4Y8nCeLoquU9jgkFN Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 09:56:59 -0000 On 01/20/2012 10:11 AM, Michael Fuckner wrote: > On 01/20/2012 09:50 AM, Peter Maloney wrote: > > > Hi all, > > >> The "hot pull test": >> -------------- >> dd if=/dev/random of=/somewhere/on/the/disk bs=128k >> pull disk >> wait 1 second >> put disk back in >> wait 1 second > > > this is way too fast- if a device is added or removed there is a > complete rediscovery on the SAS-bus which takes at least 15 seconds. Good point. But as a test, it was very reliable for me in determining that the SSD firmware version was the fault. And I think it is true that it takes about 15 seconds before the "zpool online ..." will work. So let's insert "wait 15+ seconds" before the "zpool online" command. And optionally change all waits to 15+ seconds, depending on what you want to prove (production-like environment vs. make no compromise to make it fail). > > In production environments this short span wouldn't happen, so this > test may produce segfaults, but it is not realistic. 
I am not sure if you are referring to the seg faults I caused, but FYI when the timeouts happened on their own, without hot pulling or issuing any commands to test it, I could always cause a seg fault minutes, hours or days later by running either:
- gpart show or
- gpart show da##
and a panic by running:
- gpart recover da## or
- camcontrol reset 0:#:0
> Regards, > Michael! > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" -- -------------------------------------------- Peter Maloney Brockmann Consult Max-Planck-Str. 2 21502 Geesthacht Germany Tel: +49 4152 889 300 Fax: +49 4152 889 333 E-mail: peter.maloney@brockmann-consult.de Internet: http://www.brockmann-consult.de -------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 10:01:12 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 41A7B106566C for ; Fri, 20 Jan 2012 10:01:12 +0000 (UTC) (envelope-from prvs=13668cec48=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id C2D958FC19 for ; Fri, 20 Jan 2012 10:01:11 +0000 (UTC) X-Spam-Processed: mail1.multiplay.co.uk, Fri, 20 Jan 2012 09:50:12 +0000 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50017636611.msg for ; Fri, 20 Jan 2012 09:50:11 +0000 X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=13668cec48=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <2E561EFEF6054756B96B0416D1075D75@multiplay.co.uk> From: "Steven Hartland" To: "Jeremy Chadwick" , "Peter Maloney" References: <4F192ADA.5020903@brockmann-consult.de> <20120120090915.GA90876@icarus.home.lan> Date: Fri, 20 Jan 2012 09:50:10 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 10:01:12 -0000 ----- Original Message ----- From: "Jeremy Chadwick" > My below comment is unrelated to mps driver bugs and so on, but: > > Owners of Crucial m4 SSDs need to be aware of this catastrophic problem > with the drives: Crucial m4 SSDs begin to fail (data loss) starting at > the 5200 power-on-hours count. Apparently the problem is triggered by > some sort of SMART-related ordeal as well (I have no details). This is > a firmware bug and has been confirmed by Crucial. > > A firmware fix just came out a few days ago for the problem.
Here's the > URL where I was made aware of this problem (note the trailing hyphen > please): > > http://www.dslreports.com/forum/r26745697- > > And media confirmations: > > http://www.theverge.com/2012/1/17/2713178/crucial-m4-ssd-firmware-update-fixes-recurring-bsod > http://www.anandtech.com/show/5308/crucial-to-fix-m4-bsod-issue-in-two-weeks > http://www.anandtech.com/show/5424/crucial-provides-a-firmware-update-for-m4-to-fix-the-bsod-issue > > I've now added Crucial to my SSD brands to avoid (currently OCZ and > Crucial). Welcome to year 20xx, where nobody actually does quality > assurance properly. OCZ is on our list as the drives get into a state of VERY low throughput which even a secure erase can't fix. Corsair disks using the same chipset don't have these issues, so that's what we're using now :) Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 10:10:29 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 99419106564A for ; Fri, 20 Jan 2012 10:10:29 +0000 (UTC) (envelope-from wjw@digiware.nl) Received: from mail.digiware.nl (mail.ip6.digiware.nl [IPv6:2001:4cb8:1:106::2]) by mx1.freebsd.org (Postfix) with ESMTP id 32F718FC12 for ; Fri, 20 Jan 2012 10:10:29 +0000 (UTC) Received: from rack1.digiware.nl (localhost.digiware.nl [127.0.0.1]) by mail.digiware.nl (Postfix) with ESMTP id BA733153439 for ; Fri, 20 Jan 2012 11:10:27 +0100 (CET) X-Virus-Scanned: amavisd-new at digiware.nl Received: from mail.digiware.nl ([127.0.0.1]) by rack1.digiware.nl (rack1.digiware.nl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id O-bbRlI7r36Z for ; Fri, 20 Jan 2012 11:10:26 +0100 (CET) Received: from [127.0.0.1] (opteron [192.168.10.67]) by mail.digiware.nl (Postfix) with ESMTP id 84AD7153433 for ; Fri, 20 Jan 2012 11:10:26 +0100 (CET) Message-ID: <4F193D90.9020703@digiware.nl> Date: Fri, 20 Jan 2012 11:10:24 +0100 From: Willem Jan Withagen User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1 MIME-Version: 1.0 To: fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: Subject: Question about ZFS with log and cache on SSD with GPT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 10:10:29 -0000 Hi, I need to run this too big MySQL database on a too small development server, so I need to tweak what I have there.... CPU (4 core HT XEON @ 3GHz) is more than powerful enough since the query rate is low, but the amount of data is huge (50GB). Memory (16G) could be better, but all slots are full. The server is not really swapping. Now my question is more about the SSD configuration. (BTW adding 1 SSD got the insert rate up from 100/sec to > 1000/sec, once the cache was loaded.)
The database is on a mirror of 2 1TB disks:

ada0: ATA-8 SATA 3.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled

and there are 2 SSDs:

ada2: ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled

What I've currently done is partition all disks (also the SSDs) with GPT like below:

batman# zpool iostat -v
                 capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
zfsboot        50.0G  49.5G      1     13  46.0K   164K
  mirror       50.0G  49.5G      1     13  46.0K   164K
    gpt/boot4      -      -      0      5  23.0K   164K
    gpt/boot6      -      -      0      5  22.9K   164K
-------------  -----  -----  -----  -----  -----  -----
zfsdata        59.4G   765G     12     62   250K  1.30M
  mirror       59.4G   765G     12     62   250K  1.30M
    gpt/data4      -      -      5     15   127K  1.30M
    gpt/data6      -      -      5     15   127K  1.30M
  gpt/log2       11M  1005M      0     22     12   653K
  gpt/log3     11.1M  1005M      0     22     12   652K
cache              -      -      -      -      -      -
  gpt/cache2   9.99G  26.3G     27     53  1.20M  5.30M
  gpt/cache3   9.85G  26.4G     28     54  1.24M  5.23M
-------------  -----  -----  -----  -----  -----  -----

Disks 4 and 6 are naming remains of pre-AHCI times and are ada0 and ada1. So the hard disks have the "std" ZFS setup: a boot-pool and a data-pool.

The SSDs are partitioned and assigned to zfsdata with:

gpart create -s GPT ada2
gpart create -s GPT ada3
gpart add -t freebsd-zfs -l log2 -s 1G ada2
gpart add -t freebsd-zfs -l log3 -s 1G ada3
gpart add -t freebsd-zfs -l cache2 ada2
gpart add -t freebsd-zfs -l cache3 ada3
zpool add zfsdata log /dev/gpt/log*
zpool add zfsdata cache /dev/gpt/cache*

Now the question would be: are the GPT partitions correctly aligned to give optimal performance? The hard disks still use standard 512-byte sectors, so those should be alright. About the SSDs I have my doubts.....

The good thing is that v28 allows you to toy with log and cache without losing data, so I could redo the creation of cache and log relatively easily. I'd rather not redo the DB build since that takes a few days. :( But before loading the DB, I did use some of the tuning suggestions, like using a different recordsize for db-logs and innodb files.

Anybody have suggestions and/or experience with this?
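If the SSD partitions turn out to be misaligned, rebuilding them would go roughly like this -- a sketch only: v28 allows removing both log and cache vdevs, and gpart's -a alignment flag needs a recent 8-STABLE or 9.x, so check gpart(8) first:

zpool remove zfsdata gpt/cache2 gpt/cache3
zpool remove zfsdata gpt/log2 gpt/log3
gpart delete -i 2 ada2 && gpart delete -i 1 ada2    # repeat for ada3
gpart add -t freebsd-zfs -l log2 -a 1m -s 1G ada2
gpart add -t freebsd-zfs -l cache2 -a 1m ada2       # log3/cache3 on ada3
zpool add zfsdata log /dev/gpt/log2 /dev/gpt/log3
zpool add zfsdata cache /dev/gpt/cache2 /dev/gpt/cache3

That only touches the SSDs, so the database on the data mirror stays put.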
Thanx, --WjW From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 11:29:58 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7A168106564A for ; Fri, 20 Jan 2012 11:29:58 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta04.emeryville.ca.mail.comcast.net (qmta04.emeryville.ca.mail.comcast.net [76.96.30.40]) by mx1.freebsd.org (Postfix) with ESMTP id 594BF8FC12 for ; Fri, 20 Jan 2012 11:29:57 +0000 (UTC) Received: from omta13.emeryville.ca.mail.comcast.net ([76.96.30.52]) by qmta04.emeryville.ca.mail.comcast.net with comcast id PnUz1i00517UAYkA4nVxBk; Fri, 20 Jan 2012 11:29:57 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta13.emeryville.ca.mail.comcast.net with comcast id PnVw1i00M1t3BNj8ZnVx8b; Fri, 20 Jan 2012 11:29:57 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id BAD03102C19; Fri, 20 Jan 2012 03:29:56 -0800 (PST) Date: Fri, 20 Jan 2012 03:29:56 -0800 From: Jeremy Chadwick To: Peter Maloney Message-ID: <20120120112956.GA91803@icarus.home.lan> References: <4F192ADA.5020903@brockmann-consult.de> <20120120090915.GA90876@icarus.home.lan> <4F19359E.6030901@brockmann-consult.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4F19359E.6030901@brockmann-consult.de> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 11:29:58 -0000 On Fri, Jan 20, 2012 at 10:36:30AM +0100, Peter Maloney wrote: > On 01/20/2012 10:09 AM, Jeremy Chadwick wrote: > > And media confirmations: > > > > http://www.theverge.com/2012/1/17/2713178/crucial-m4-ssd-firmware-update-fixes-recurring-bsod > > http://www.anandtech.com/show/5308/crucial-to-fix-m4-bsod-issue-in-two-weeks > > http://www.anandtech.com/show/5424/crucial-provides-a-firmware-update-for-m4-to-fix-the-bsod-issue > > > > I've now added Crucial to my SSD brands to avoid (currently OCZ and > > Crucial). Welcome to year 20xx, where nobody actually does quality > > assurance properly. > > What is your problem with OCZ SSDs? Extremely high failure rate compared to other vendors (Intel was the best until the recent 320-series "8MByte capacity" firmware fiasco), and all sorts of performance problems all across the board. Every single time I hear about a new OCZ SSD product, I wait 8-10 weeks and suddenly there's some sort of major bug/quirk found with it. I grew tired of following the trend. I don't know how else to describe it, honestly. If others are cool with it, that's fine -- honest, no argument, everyone should make their own decisions based on what works for them. But for me, it doesn't jibe. What I've been doing for years with a pretty good success rate: before considering any "consumer" product (especially if product reliability matters in your environment, e.g. you don't have HA or 200 spare fail-over boxes available at any moment), visit the Support Forum of the vendor, subcategory relating to the item you're interested in. Spend a few hours over the course of 3-4 weeks reading end-user reports and experiences. 
Yes, I am well aware that most end-users cannot diagnose problems worth a damn, but it doesn't matter -- all you have to look for is common trends in reports to get a feel for something. If you aren't left with that "warm fuzzy" feeling (SAs here should know what I mean), take note of your reasons and move on to another product. When a colleague (not someone online) asks you "What do you mean you don't like FooProduct?" you can say "here's why". This is what I did before choosing to invest in Intel SSDs for our servers, as well as for my workstation (Windows box) and my home FreeBSD box. I even did the latter with regards to some perl software I wrote for my own purposes. I went looking for a config file parsing perl module that did what I needed. I tried 15 modules. FIFTEEN OF THEM. I began to lose track which ones I tried and why they sucked. So, in the software repo of the program I wrote, I made this file: -rw------- 1 jdc users 4368 Mar 5 2008 horrible_perl_modules.txt Which documented the shortcomings/issues I had with them. A few years later, someone asked me for this program I wrote so I sent them a tarball. The following morning I had a mail from them saying "Oh my god, horrible_perl_modules.txt! I wish more people did write-ups like this in their software explaining why they used ThingA and why Thing[XYZ] didn't work *for this program specifically*!" Yeah well, that's just how I operate. Back to my method -- it's in no way shape or form fail-proof. For example, I have two Intel 320-series SSDs that have not been bit by the "8MByte capacity" bug but I pulled them out of use the INSTANT Intel confirmed its existence and replaced them with either X25-M or 510-series units. I didn't jump on the Intel Cougar Point bandwagon (I waited a year), thus avoided the SATA-related B2 stepping bug (who will remember that in years to come?). But I can't tell you how many times I've sighed and said "dodged that bullet!" It's far from perfect, but it's a hell of a lot better than making a decision based on some biased hardware review site benchmarks, or even word-of-mouth (which matters a lot, but only if the person who's selling you the product has the same belief system you do ;-) ). You have to visit the Support Forums, it's really the only way you'll know of what trouble you might be getting yourself into -- and you can then determine if that risk is worth it or not. If it is, as I said, totally cool. If it isn't, also cool. Whatever works for you! When it comes to products, I don't tolerate vendors' "hey, mistakes happen" behaviour, and (when I can) I speak with my wallet, because at the end of the day that's really the only way to "talk" to a vendor. All this stuff is made by humans, and we're not perfect beings. I accept that. Certain technology today is more reliable than it was 20-25 years ago, while other technology isn't. We've had to make a lot of trade-offs as technology has evolved though, and some (most?) of those I do not agree with. It's all about quality assurance and decent testing or lack thereof. It often shocks me how many companies have QA departments who only test what they're shown/told to test and not actually assure quality. So many QA divisions do not understand the innards of what they're testing; "run this script, look at these numbers" seems to be the modus operandi. We've got it all wrong. Finally, please note: I am in no way shape or form an "Intel fanboy". I use whatever products I choose at the time. 
I'd rather not cite examples in this mail (it's long enough as is), nor privately, but believe me, I have a list of lots of products, and a shorter list of actual vendors/manufacturers that I avoid. A year from now those lists might be different based on what I witness, discover, or even try. You might even find some of my examples on the web if you look hard enough.

For me, it feels like it boils down to one thing: I am a dying breed of system administrator and technician. In today's technology/IT world, the mentality is that everything is expendable, everything will fail (and more importantly don't bother figuring out **why**, just replace it and pretend it didn't happen; solve nothing, bury head in ground). I feel like I'm one of those rare few who was taught skills based on a foundation of KISS principle and actually *solving problems* rather than saying "f-it" and accepting them as "technology is just flaky". I wasn't raised, educated, nor trained to accept that excuse. I guess that's why I consider myself a technophobe; I feel more and more like H.P. Lovecraft every time I have to deal with a bug in, well, anything.

Anyway, that's all from me on the matter. I won't be replying past this point; there just isn't much for me to say. (Sorry, I've had a very, VERY long week...)

-- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB |

From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 12:52:29 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 790731065707 for ; Fri, 20 Jan 2012 12:52:29 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id F1BAC8FC08 for ; Fri, 20 Jan 2012 12:52:28 +0000 (UTC) Received: from dcave.digsys.bg (dcave.digsys.bg [192.92.129.5]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.5/8.14.5) with ESMTP id q0KCIEgR078394 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Fri, 20 Jan 2012 14:18:20 +0200 (EET) (envelope-from daniel@digsys.bg) Message-ID: <4F195B86.2060504@digsys.bg> Date: Fri, 20 Jan 2012 14:18:14 +0200 From: Daniel Kalchev User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:9.0) Gecko/20111228 Thunderbird/9.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <4F192ADA.5020903@brockmann-consult.de> <20120120090915.GA90876@icarus.home.lan> <4F19359E.6030901@brockmann-consult.de> <20120120112956.GA91803@icarus.home.lan> In-Reply-To: <20120120112956.GA91803@icarus.home.lan> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 12:52:29 -0000

On 20.01.12 13:29, Jeremy Chadwick wrote: [...] > I feel like I'm one of those rare few who was taught skills based on a > foundation of KISS principle and actually *solving problems* rather > than saying "f-it" and accepting them as "technology is just flaky". [...]

Sigh...
I don't feel alone anymore. Seriously, nowadays I get bored the instant my colleagues start looking at me strangely when told "don't waste your time ever trying this", and by the fact that people often get angry with you for showing them evidence that the concept/product/whatever they are so excited about is crap.

Daniel

PS: On the topic :-) I find the 'dumb' Supermicro-branded LSI2008 boards with IT firmware rock solid so far. Ever since I lost a few 3ware arrays and spent several sleepless nights recovering precious data, I don't trust any 'hardware RAID' solution anymore. But that is just me.

From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 14:22:20 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BA53C1065675 for ; Fri, 20 Jan 2012 14:22:20 +0000 (UTC) (envelope-from freebsd@penx.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 7F47C8FC14 for ; Fri, 20 Jan 2012 14:22:20 +0000 (UTC) Received: from [IPv6:::1] (localhost [IPv6:::1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q0KEMBvP077369; Fri, 20 Jan 2012 06:22:12 -0800 (PST) (envelope-from freebsd@penx.com) From: Dennis Glatting To: Peter Maloney In-Reply-To: <4F192ADA.5020903@brockmann-consult.de> References: <4F192ADA.5020903@brockmann-consult.de> Content-Type: text/plain; charset="us-ascii" Date: Fri, 20 Jan 2012 06:22:11 -0800 Message-ID: <1327069331.29444.4.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q0KEMBvP077369 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: freebsd@penx.com Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 14:22:20 -0000

On Fri, 2012-01-20 at 09:50 +0100, Peter Maloney wrote:
> John,
>
> Various people have problems with mps and ZFS.
>
> I am using 8-STABLE from October 2011, and on the 9211-8i HBA, I am using 9 IT firmware. In my case, it was the firmware on an SSD that caused problems:
>   Crucial M4-CT256M4SSD2 firmware 0001
> Randomly it would fail. Trying to reproduce with heavy IO didn't work. But I found that hot pulling works. Hot pulling the disk a few times while mounted causes the disk to never respond again until rebooting (causing SCSI timeouts). When running "gpart recover da##" or "camcontrol reset ..." on the disk after it is removed, the kernel panics. The mpslsi driver does not solve the problem with the CT256M4SSD2 and firmware 0001, but firmware 0009 seems to work. Trying the 'lost disk' on another machine works. But FreeBSD needs to be rebooted, maybe for some part of the hardware to reset and forget about the disk.
>
> From Sebulon, with Samsung Spinpoint disks, here is a similar problem in this thread:
> http://forums.freebsd.org/showthread.php?t=27128
> And Beeblebrox, with different Samsung Spinpoint disks:
> http://forums.freebsd.org/showthread.php?p=162201#post162201
>
> And Jason Wolfe, with Seagate ST91000640SS disks (with mps):
> http://osdir.com/ml/freebsd-scsi/2011-11/msg00006.html (freebsd-fs list, with original post at 11/01/2011 07:13 PM CET)
> But with mpslsi, the problems go away, he says. I tried reproducing his problem on my system (on my M4-CT256M4SSD2 0001 and my HDS5C3030ALA630), and was able to get a timeout similar to his with mpslsi (one time out of many tries), and it recovered gracefully, as he says his does. So based on that, I would say mpslsi is the safest choice. Perhaps the same problem on mps will cause a crash on any system with any disk, not just ST91000640SS disks.
>
> I am using the following disks with no known problems:
>   Hitachi HUA723030ALA640 firmware MKAOA580 (tested with mps and mpslsi, didn't test hot pull)
>   Seagate ST33000650NS firmware 0002 (tested with mps and mpslsi, didn't test hot pull)
>   Hitachi HDS5C3030ALA630 firmware MEAOA580 (tested mostly with mpslsi, and tested hot pull)
>   Crucial M4-CT256M4SSD2 firmware 0009 (tested only with mpslsi; not fully tested yet, but passes the hot pull test; has a URE which it didn't have with firmware 0001)
>
I am having a problem with Seagate ST1000DL002 disks, but I haven't yet determined whether it is the disks themselves (they -- two of them, new -- fail under a MB controller too).
>
> The "hot pull test":
> --------------
> dd if=/dev/random of=/somewhere/on/the/disk bs=128k
> pull disk
> wait 1 second
> put disk back in
> wait 1 second
> pull disk
> wait 1 second
> put disk back in
> wait 1 second
> hit ctrl+c on the dd command
> wait for messages to stop on tty1 / syslog
> gpart show
> zpool status
> zpool online
> zpool status
>
> If gpart show does not seg fault, and zpool online causes the disk to resilver, then it is all good.
>
> (40% of the time, the bad SSD passes the test if only pulled once, and so far 0% if pulled twice, and one time out of all tests, the red lights blink on all disks on the controller when the bad disk is pulled)
> --------------
>
> So, I would say that with the right combination of hardware, you have a fine system. So just test your disk however you think works best. If you want to use mps, use the "smartctl -a" loop test to make sure it handles it. If during the test you get no timeouts, I would call the test indeterminate. A pass looks like what Jason Wolfe posted in the mailing list (linked above): "SMID ... finished recovery after aborting TaskMID ...".
>
> Peter
>
> On 01/20/2012 01:08 AM, John Kozubik wrote:
> > We're about to invest heavily in a new ZFS infrastructure, and our plans are to:
> > - wait for 8.3, with the updated 6gbps mps driver
> > - install and use LSI 9211-8i cards with newest "IT" firmware
> >
> > This appears to be the de facto standard for ZFS HBAs ...
> > Is there any reason to consider other cards/vendors ?
> > Are these indeed considered solid (provided I use the new mps in 8.3) ?
> > Thanks.
> > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > > -- > > -------------------------------------------- > Peter Maloney > Brockmann Consult > Max-Planck-Str. 2 > 21502 Geesthacht > Germany > Tel: +49 4152 889 300 > Fax: +49 4152 889 333 > E-mail: peter.maloney@brockmann-consult.de > Internet: http://www.brockmann-consult.de > -------------------------------------------- > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 14:30:41 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 86712106566B for ; Fri, 20 Jan 2012 14:30:41 +0000 (UTC) (envelope-from dg17@penx.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 53E168FC0A for ; Fri, 20 Jan 2012 14:30:41 +0000 (UTC) Received: from [IPv6:::1] (localhost [IPv6:::1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q0KEUOpf077640; Fri, 20 Jan 2012 06:30:24 -0800 (PST) (envelope-from dg17@penx.com) From: Dennis Glatting To: Willem Jan Withagen In-Reply-To: <4F193D90.9020703@digiware.nl> References: <4F193D90.9020703@digiware.nl> Content-Type: text/plain; charset="us-ascii" Date: Fri, 20 Jan 2012 06:30:24 -0800 Message-ID: <1327069824.77378.3.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q0KEUOpf077640 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: dg17@penx.com Cc: fs@freebsd.org Subject: Re: Question about ZFS with log and cache on SSD with GPT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dg17@penx.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 14:30:41 -0000 On Fri, 2012-01-20 at 11:10 +0100, Willem Jan Withagen wrote: > CPU (4 core HT XEON@3Ghz) is more than powerful enough since the query > rate is low, but the amount of data is huge. (50Gb) > Memory (16G) could be better, but all slots are full. The server is > not > really swapping. > In a few cases I have used SSDs for swap. The nice thing about SSDs is I can sticky-tape them to the inside of a chassis. :) I haven't read whether this is a good idea or a bad idea but I haven't yet had any trouble. 
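[For anyone filing that away: plumbing an SSD partition in as swap is only a couple of commands. A minimal sketch, assuming free space on a GPT-partitioned SSD; the ada2 device, swap0 label, and 8G size are illustrative examples, not taken from Dennis's setup:

# carve out and label a swap partition on the SSD
gpart add -t freebsd-swap -l swap0 -s 8G ada2
# enable it immediately, then verify it shows up
swapon /dev/gpt/swap0
swapinfo
# /etc/fstab entry so it persists across reboots:
# /dev/gpt/swap0   none   swap   sw   0   0

Using the GPT label rather than the raw device name keeps the fstab entry valid even if the drive moves to another port.]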
From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 14:52:00 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3EF3B1065670 for ; Fri, 20 Jan 2012 14:52:00 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.187]) by mx1.freebsd.org (Postfix) with ESMTP id C45D48FC23 for ; Fri, 20 Jan 2012 14:51:59 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mreu2) with ESMTP (Nemesis) id 0LzWOk-1Sk11D48zi-014sfF; Fri, 20 Jan 2012 15:51:58 +0100 Message-ID: <4F197F8D.7010404@brockmann-consult.de> Date: Fri, 20 Jan 2012 15:51:57 +0100 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.23) Gecko/20110922 Thunderbird/3.1.15 MIME-Version: 1.0 To: Dennis Glatting References: <4F192ADA.5020903@brockmann-consult.de> <1327069331.29444.4.camel@btw.pki2.com> In-Reply-To: <1327069331.29444.4.camel@btw.pki2.com> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Provags-ID: V02:K0:iDyDeHYYaKLUEQ85i8ywxonIjPnA0fHsGdvHGgoVi01 olKHbqkcXj0BEZiSvVeRBF9r5NA7mjYMA4qnG5ghFAACotBozU DZfP9F6d22cylYgVj2Sj4qkt44o+qRA3vVsqvbIiXHdCHGOYNT Vl5icJcbTPP+16XuQ4jLFfA7Qk1M1G01bTbhe7hZdzBFP3YiIF jpM+OUaYHFwxixQ8mI0bwCRr+GmSAGijZwXCYXIxJNCQF07oi6 Vygd4wK/fzQRQtpGmtoK26thU1+ybpj2H6lPQ76dEaV8HCsk0r r8fSAtq8rkrnWtCHCzg0wWJV+N3yciCkcCjhyUe46Rq7Jc7HcX J9WXb4nXvVFxAPFSEICehfjrFlbbqZ+o2KGaGNdSf Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 14:52:00 -0000

On 01/20/2012 03:22 PM, Dennis Glatting wrote: > > I am having a problem with Seagate ST1000DL002 disks, but I haven't yet > determined whether it is the disks themselves (they -- two of them, new > -- fail under a MB controller too). >

I happen to have some ST2000DL003 disks on hand (same as yours, but 2TB instead of 1, and I don't know what firmware)... I could try my hot pull test with them to see what happens.

What sort of failure is happening? Do you use a ZIL on a device other than an ST1000DL002? Please send output of smartctl -i (particularly interested in the firmware version).

-- -------------------------------------------- Peter Maloney Brockmann Consult Max-Planck-Str.
2 21502 Geesthacht Germany Tel: +49 4152 889 300 Fax: +49 4152 889 333 E-mail: peter.maloney@brockmann-consult.de Internet: http://www.brockmann-consult.de --------------------------------------------

From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 15:31:30 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EBEBC106564A for ; Fri, 20 Jan 2012 15:31:30 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta02.emeryville.ca.mail.comcast.net (qmta02.emeryville.ca.mail.comcast.net [76.96.30.24]) by mx1.freebsd.org (Postfix) with ESMTP id CBFD78FC1D for ; Fri, 20 Jan 2012 15:31:30 +0000 (UTC) Received: from omta23.emeryville.ca.mail.comcast.net ([76.96.30.90]) by qmta02.emeryville.ca.mail.comcast.net with comcast id PrJC1i0011wfjNsA2rXW1x; Fri, 20 Jan 2012 15:31:30 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta23.emeryville.ca.mail.comcast.net with comcast id PrXV1i00t1t3BNj8jrXV4a; Fri, 20 Jan 2012 15:31:29 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 16E1C102C19; Fri, 20 Jan 2012 07:31:29 -0800 (PST) Date: Fri, 20 Jan 2012 07:31:29 -0800 From: Jeremy Chadwick To: Dennis Glatting Message-ID: <20120120153129.GA97746@icarus.home.lan> References: <4F192ADA.5020903@brockmann-consult.de> <1327069331.29444.4.camel@btw.pki2.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1327069331.29444.4.camel@btw.pki2.com> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 15:31:31 -0000

On Fri, Jan 20, 2012 at 06:22:11AM -0800, Dennis Glatting wrote: > I am having a problem with Seagate ST1000DL002 disks, but I haven't yet > determined whether it is the disks themselves (they -- two of them, new > -- fail under a MB controller too).

Assuming the disks are seen directly on the bus (e.g. show up as daX, adaX, or whatever), please install ports/sysutils/smartmontools (make sure you're using version 5.42 or newer) and please provide output from the following command: "smartctl -a /dev/XXX" where XXX is the device name of the ST1000DL002 disk(s). Please be sure to state which device name is associated with which smartctl output. You can delete or remove the disk serial numbers from the output (for privacy) if you wish. I'll be happy to review the data and tell you whether or not the disks themselves are showing problems or if the issue is elsewhere.

-- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977.
PGP 4BD6C0CB |

From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 16:31:56 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F1EEC1065785 for ; Fri, 20 Jan 2012 16:31:56 +0000 (UTC) (envelope-from dg@pki2.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 05B6A8FC20 for ; Fri, 20 Jan 2012 16:31:54 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q0KGVY7a081124; Fri, 20 Jan 2012 08:31:34 -0800 (PST) (envelope-from dg@pki2.com) From: Dennis Glatting To: Jeremy Chadwick In-Reply-To: <20120120153129.GA97746@icarus.home.lan> References: <4F192ADA.5020903@brockmann-consult.de> <1327069331.29444.4.camel@btw.pki2.com> <20120120153129.GA97746@icarus.home.lan> Date: Fri, 20 Jan 2012 08:31:34 -0800 Message-ID: <1327077094.29408.11.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q0KGVY7a081124 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: dg@pki2.com Content-Type: text/plain; charset="ISO-8859-1" Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 16:31:57 -0000

On Fri, 2012-01-20 at 07:31 -0800, Jeremy Chadwick wrote: > On Fri, Jan 20, 2012 at 06:22:11AM -0800, Dennis Glatting wrote: > > I am having a problem with Seagate ST1000DL002 disks, but I haven't yet > > determined whether it is the disks themselves (they -- two of them, new > > -- fail under a MB controller too). > > Assuming the disks are seen directly on the bus (e.g. show up as daX, > adaX, or whatever), please install ports/sysutils/smartmontools (make > sure you're using version 5.42 or newer) and please provide output from > the following command: "smartctl -a /dev/XXX" where XXX is the device > name of the ST1000DL002 disk(s). Please be sure to state which device > name is associated with which smartctl output. You can delete or > remove the disk serial numbers from the output (for privacy) if you > wish. I'll be happy to review the data and tell you whether or not the > disks themselves are showing problems or if the issue is elsewhere. >

That is the motivation I needed to reboot that system, which was 50% through a task. That said, as remains the case today, for the last 20 years I haven't been able to find that "Any Key" on reboot. :) Regardless...

The problematic disk from dmesg:

da12 at mps2 bus 0 scbus4 target 5 lun 0
da12: Fixed Direct Access SCSI-6 device
da12: 300.000MB/s transfers
da12: Command Queueing enabled
da12: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)

An attempt to write to it:

bd3# dd if=/dev/zero of=/dev/da12
dd: /dev/da12: Input/output error
1+0 records in
0+0 records out
0 bytes transferred in 0.378153 secs (0 bytes/sec)

The disk is presently connected to this device (LSI 9211-8i), but I have also had it connected to the devices on the MB and I think to a SuperMicro board. I have also tried a different LSI board.
bd3# dmesg | grep mps2 mps2: port 0x8e00-0x8eff mem 0xfdcfc000-0xfdcfffff,0xfdc80000-0xfdcbffff irq 16 at device 0.0 on pci6 mps2: Firmware: 12.00.00.00 mps2: IOCCapabilities: 1285c The system is: bd3# uname -a FreeBSD bd3 9.0-STABLE FreeBSD 9.0-STABLE #1: Tue Jan 10 22:34:53 PST 2012 root@bd3:/sys/amd64/compile/BULLDOZER amd64 Full dmesg at the end. Note the errors with da12. And your request: bd3# smartctl -a /dev/da12 smartctl 5.42 2011-10-20 r3458 [FreeBSD 9.0-STABLE amd64] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda Green (Adv. Format) Device Model: ST1000DL002-9TT153 Serial Number: W1V06SLR LU WWN Device Id: 5 000c50 037e11be9 Firmware Version: CC32 User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 4 Local Time is: Fri Jan 20 08:22:34 2012 PST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 612) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 169) minutes. Conveyance self-test routine recommended polling time: ( 2) minutes. SCT capabilities: (0x30b7) SCT Status supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 108 099 006 Pre-fail Always - 241488 3 Spin_Up_Time 0x0003 087 070 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 28 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 100 253 030 Pre-fail Always - 136324 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 576 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 29 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 073 062 045 Old_age Always - 27 (Min/Max 21/27) 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 23 193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 29 194 Temperature_Celsius 0x0022 027 040 000 Old_age Always - 27 (0 21 0 0 0) 195 Hardware_ECC_Recovered 0x001a 027 008 000 Old_age Always - 241488 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 265544943010369 241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 3746932548 242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 3212957483 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 309 - # 2 Short offline Completed without error 00% 285 - # 3 Short offline Completed without error 00% 261 - # 4 Extended offline Completed without error 00% 258 - # 5 Short offline Completed without error 00% 237 - # 6 Short offline Completed without error 00% 213 - # 7 Short offline Completed without error 00% 189 - # 8 Short offline Completed without error 00% 175 - # 9 Short offline Completed without error 00% 151 - #10 Short offline Completed without error 00% 127 - #11 Short offline Completed without error 00% 103 - #12 Short offline Completed without error 00% 79 - #13 Short offline Completed without error 00% 55 - #14 Short offline Completed without error 00% 31 - #15 Short offline Completed without error 00% 7 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. bd3# dmesg Copyright (c) 1992-2012 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation. 
FreeBSD 9.0-STABLE #1: Tue Jan 10 22:34:53 PST 2012 root@bd3:/sys/amd64/compile/BULLDOZER amd64 CPU: AMD FX(tm)-8150 Eight-Core Processor (4017.98-MHz K8-class CPU) Origin = "AuthenticAMD" Id = 0x600f12 Family = 15 Model = 1 Stepping = 2 Features=0x178bfbff Features2=0x1698220b AMD Features=0x2e500800 AMD Features2=0x1c9bfff,> TSC: P-state invariant, performance statistics real memory = 17179869184 (16384 MB) avail memory = 16507908096 (15743 MB) Event timer "LAPIC" quality 400 ACPI APIC Table: FreeBSD/SMP: Multiprocessor System Detected: 8 CPUs FreeBSD/SMP: 1 package(s) x 8 core(s) cpu0 (BSP): APIC ID: 0 cpu1 (AP): APIC ID: 1 cpu2 (AP): APIC ID: 2 cpu3 (AP): APIC ID: 3 cpu4 (AP): APIC ID: 4 cpu5 (AP): APIC ID: 5 cpu6 (AP): APIC ID: 6 cpu7 (AP): APIC ID: 7 ioapic0: Changing APIC ID to 8 ioapic0 irqs 0-23 on motherboard kbd1 at kbdmux0 acpi0: on motherboard acpi0: Power Button (fixed) acpi0: reservation of 0, a0000 (3) failed acpi0: reservation of 100000, cfca0000 (3) failed Timecounter "ACPI-safe" frequency 3579545 Hz quality 850 acpi_timer0: <32-bit timer at 3.579545MHz> port 0x808-0x80b on acpi0 cpu0: on acpi0 cpu1: on acpi0 cpu2: on acpi0 cpu3: on acpi0 cpu4: on acpi0 cpu5: on acpi0 cpu6: on acpi0 cpu7: on acpi0 acpi_button0: on acpi0 pcib0: port 0xcf8-0xcff on acpi0 pci0: on pcib0 pci0: at device 0.2 (no driver attached) pcib1: irq 19 at device 3.0 on pci0 pci1: on pcib1 mps0: port 0xde00-0xdeff mem 0xfd9fc000-0xfd9fffff,0xfd980000-0xfd9bffff irq 19 at device 0.0 on pci1 mps0: Firmware: 12.00.00.00 mps0: IOCCapabilities: 1285c pcib2: irq 16 at device 4.0 on pci0 pci2: on pcib2 em0: port 0xcf00-0xcf1f mem 0xfd7c0000-0xfd7dffff,0xfd700000-0xfd77ffff,0xfd7fc000-0xfd7fffff irq 16 at device 0.0 on pci2 em0: Using MSIX interrupts with 3 vectors em0: Ethernet address: 00:1b:21:c6:d2:a0 pcib3: irq 17 at device 9.0 on pci0 pci3: on pcib3 xhci0: mem 0xfd5f8000-0xfd5fffff irq 17 at device 0.0 on pci3 xhci0: 64 byte context size. 
usbus0 on xhci0 pcib4: irq 18 at device 10.0 on pci0 pci4: on pcib4 atapci0: port 0xaf00-0xaf07,0xae00-0xae03,0xad00-0xad07,0xac00-0xac03,0xab00-0xab0f mem 0xfcbff000-0xfcbff1ff irq 18 at device 0.0 on pci4 ata2: at channel 0 on atapci0 ata3: at channel 1 on atapci0 pcib5: irq 19 at device 11.0 on pci0 pci5: on pcib5 mps1: port 0x9e00-0x9eff mem 0xfdefc000-0xfdefffff,0xfde80000-0xfdebffff irq 19 at device 0.0 on pci5 mps1: Firmware: 12.00.00.00 mps1: IOCCapabilities: 185c pcib6: irq 16 at device 12.0 on pci0 pci6: on pcib6 mps2: port 0x8e00-0x8eff mem 0xfdcfc000-0xfdcfffff,0xfdc80000-0xfdcbffff irq 16 at device 0.0 on pci6 mps2: Firmware: 12.00.00.00 mps2: IOCCapabilities: 1285c pcib7: irq 17 at device 13.0 on pci0 pci7: on pcib7 vgapci0: port 0xee00-0xeeff mem 0xd0000000-0xdfffffff,0xfdac0000-0xfdadffff irq 17 at device 0.0 on pci7 hdac0: mem 0xfdafc000-0xfdafffff irq 18 at device 0.1 on pci7 ahci0: port 0xff00-0xff07,0xfe00-0xfe03,0xfd00-0xfd07,0xfc00-0xfc03,0xfb00-0xfb0f mem 0xfdfff000-0xfdfff3ff irq 19 at device 17.0 on pci0 ahci0: AHCI v1.20 with 4 6Gbps ports, Port Multiplier supported ahcich0: at channel 0 on ahci0 ahcich1: at channel 1 on ahci0 ahcich2: at channel 2 on ahci0 ahcich3: at channel 3 on ahci0 ohci0: mem 0xfdffe000-0xfdffefff irq 18 at device 18.0 on pci0 usbus1: on ohci0 ehci0: mem 0xfdffd000-0xfdffd0ff irq 17 at device 18.2 on pci0 usbus2: EHCI version 1.0 usbus2: on ehci0 ohci1: mem 0xfdffc000-0xfdffcfff irq 18 at device 19.0 on pci0 usbus3: on ohci1 ehci1: mem 0xfdffb000-0xfdffb0ff irq 17 at device 19.2 on pci0 usbus4: EHCI version 1.0 usbus4: on ehci1 pci0: at device 20.0 (no driver attached) atapci1: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xfa00-0xfa0f at device 20.1 on pci0 ata0: at channel 0 on atapci1 ata1: at channel 1 on atapci1 isab0: at device 20.3 on pci0 isa0: on isab0 pcib8: at device 20.4 on pci0 pci8: on pcib8 ohci2: mem 0xfdffa000-0xfdffafff irq 18 at device 20.5 on pci0 usbus5: on ohci2 pcib9: at device 21.0 on pci0 pci9: on pcib9 re0: port 0x6e00-0x6eff mem 0xfd0ff000-0xfd0fffff,0xfd0f8000-0xfd0fbfff irq 17 at device 0.0 on pci9 re0: Using 1 MSI-X message re0: Chip rev. 0x2c800000 re0: MAC rev. 0x00000000 miibus0: on re0 rgephy0: PHY 1 on miibus0 rgephy0: none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow re0: Ethernet address: 50:e5:49:45:55:8e pcib10: at device 21.1 on pci0 pci10: on pcib10 xhci1: mem 0xfcff8000-0xfcffffff irq 17 at device 0.0 on pci10 xhci1: 64 byte context size. 
usbus6 on xhci1 pcib11: at device 21.2 on pci0 pci11: on pcib11 atapci2: port 0x3f00-0x3f07,0x3e00-0x3e03,0x3d00-0x3d07,0x3c00-0x3c03,0x3b00-0x3b0f mem 0xfcdff000-0xfcdff1ff irq 17 at device 0.0 on pci11 ata4: at channel 0 on atapci2 ata5: at channel 1 on atapci2 ohci3: mem 0xfdff9000-0xfdff9fff irq 18 at device 22.0 on pci0 usbus7: on ohci3 ehci2: mem 0xfdff8000-0xfdff80ff irq 17 at device 22.2 on pci0 usbus8: EHCI version 1.0 usbus8: on ehci2 attimer0: port 0x40-0x43 on acpi0 Timecounter "i8254" frequency 1193182 Hz quality 0 Event timer "i8254" frequency 1193182 Hz quality 100 hpet0: iomem 0xfed00000-0xfed003ff irq 0,8 on acpi0 Timecounter "HPET" frequency 14318180 Hz quality 950 atrtc0: port 0x70-0x73 on acpi0 Event timer "RTC" frequency 32768 Hz quality 0 orm0: at iomem 0xc0000-0xcffff,0xd0000-0xd5fff,0xd6000-0xd6fff on isa0 sc0: at flags 0x100 on isa0 sc0: VGA <16 virtual consoles, flags=0x300> vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 atkbdc0: at port 0x60,0x64 on isa0 atkbd0: irq 1 on atkbdc0 kbd0 at atkbd0 atkbd0: [GIANT-LOCKED] ppc0: cannot reserve I/O port range hwpstate0: on cpu0 Timecounters tick every 1.000 msec hdac0: HDA Codec #0: ATI R6xx HDMI pcm0: at cad 0 nid 1 on hdac0 usbus0: 5.0Gbps Super Speed USB v3.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 480Mbps High Speed USB v2.0 usbus3: 12Mbps Full Speed USB v1.0 usbus4: 480Mbps High Speed USB v2.0 usbus5: 12Mbps Full Speed USB v1.0 usbus6: 5.0Gbps Super Speed USB v3.0 usbus7: 12Mbps Full Speed USB v1.0 usbus8: 480Mbps High Speed USB v2.0 ugen0.1: <0x1b6f> at usbus0 uhub0: <0x1b6f XHCI root HUB, class 9/0, rev 3.00/1.00, addr 1> on usbus0 ugen1.1: at usbus1 uhub1: on usbus1 ugen2.1: at usbus2 uhub2: on usbus2 ugen3.1: at usbus3 uhub3: on usbus3 ugen4.1: at usbus4 uhub4: on usbus4 ugen5.1: at usbus5 uhub5: on usbus5 ugen6.1: <0x1b6f> at usbus6 uhub6: <0x1b6f XHCI root HUB, class 9/0, rev 3.00/1.00, addr 1> on usbus6 ugen7.1: at usbus7 uhub7: on usbus7 ugen8.1: at usbus8 uhub8: on usbus8 uhub5: 2 ports with 2 removable, self powered uhub7: 4 ports with 4 removable, self powered uhub1: 5 ports with 5 removable, self powered uhub3: 5 ports with 5 removable, self powered uhub0: 4 ports with 4 removable, self powered uhub6: 4 ports with 4 removable, self powered uhub8: 4 ports with 4 removable, self powered uhub2: 5 ports with 5 removable, self powered uhub4: 5 ports with 5 removable, self powered ugen1.2: at usbus1 ukbd0: on usbus1 kbd2 at ukbd0 uhid0: on usbus1 ada0 at ahcich0 bus 0 scbus5 target 0 lun 0 ada0: ATA-8 SATA 3.x device ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes) ada0: Command Queueing enabled ada0: 85857MB (175836528 512 byte sectors: 16H 63S/T 16383C) ada0: Previously was known as ad8 ada1 at ahcich1 bus 0 scbus6 target 0 lun 0 ada1: ATA-8 SATA 2.x device ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada1: Command Queueing enabled ada1: 57241MB (117231408 512 byte sectors: 16H 63S/T 16383C) ada1: Previously was known as ad10 ada2 at ahcich2 bus 0 scbus7 target 0 lun 0 ada2: ATA-8 SATA 2.x device ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada2: Command Queueing enabled ada2: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada2: Previously was known as ad12 da12 at mps2 bus 0 scbus4 target 5 lun 0 da12: Fixed Direct Access SCSI-6 device da12: 300.000MB/s transfers da12: Command Queueing enabled da12: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C) SMP: AP CPU #2 Launched! 
da7 at mps0 bus 0 scbus0 target 7 lun 0 da7: Fixed Direct Access SCSI-6 device da7: 300.000MB/s transfers da7: Command Queueing enabled da7: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) da3 at mps0 bus 0 scbus0 target 3 lun 0 da3: Fixed Direct Access SCSI-6 device da3: 300.000MB/s transfers da3: Command Queueing enabled da3: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) da0 at mps0 bus 0 scbus0 target 0 lun 0 da0: Fixed Direct Access SCSI-6 device da0: 300.000MB/s transfers da0: Command Queueing enabled da0: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) SMP: AP CPU #4 Launched! da6 at mps0 bus 0 scbus0 target 6 lun 0 da6: Fixed Direct Access SCSI-6 device da6: 300.000MB/s transfers da6: Command Queueing enabled da6: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) da5 at mps0 bus 0 scbus0 target 5 lun 0 da5: Fixed Direct Access SCSI-6 device da5: 300.000MB/s transfers da5: Command Queueing enabled da5: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) da4 at mps0 bus 0 scbus0 target 4 lun 0 da4: Fixed Direct Access SCSI-6 device da4: 300.000MB/s transfers da4: Command Queueing enabled da4: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) SMP: AP CPU #6 Launched! da9 at mps2 bus 0 scbus4 target 1 lun 0 da9: Fixed Direct Access SCSI-6 device da9: 300.000MB/s transfers da9: Command Queueing enabled da9: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) da10 at mps2 bus 0 scbus4 target 2 lun 0 da10: Fixed Direct Access SCSI-6 device da10: 300.000MB/s transfers da10: Command Queueing enabled da10: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) da8 at mps2 bus 0 scbus4 target 0 lun 0 da8: Fixed Direct Access SCSI-6 device da8: 300.000MB/s transfers da8: Command Queueing enabled da8: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) SMP: AP CPU #1 Launched! da11 at mps2 bus 0 scbus4 target 3 lun 0 da11: Fixed Direct Access SCSI-6 device da11: 300.000MB/s transfers da11: Command Queueing enabled da11: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) da1 at mps0 bus 0 scbus0 target 1 lun 0 da1: Fixed Direct Access SCSI-6 device da1: 300.000MB/s transfers da1: Command Queueing enabled da1: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) da2 at mps0 bus 0 scbus0 target 2 lun 0 da2: Fixed Direct Access SCSI-6 device da2: 300.000MB/s transfers da2: Command Queueing enabled da2: 2384658MB (4883781168 512 byte sectors: 255H 63S/T 304001C) SMP: AP CPU #3 Launched! SMP: AP CPU #5 Launched! SMP: AP CPU #7 Launched! Timecounter "TSC-low" frequency 15695219 Hz quality 1000 (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 1 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 1 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 1 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). 
CDB: 8 0 0 1 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 1 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): SYNCHRONIZE CACHE(10). CDB: 35 0 0 0 0 0 0 0 0 0 (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(10). CDB: 28 0 74 70 6d af 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(10). CDB: 28 0 74 70 6d af 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(10). 
CDB: 28 0 74 70 6d af 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(10). CDB: 28 0 74 70 6d af 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(10). CDB: 28 0 74 70 6d af 0 0 1 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 80 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 80 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 80 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 80 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 80 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 10 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 10 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 10 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 10 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 10 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 0 10 0 (da12:mps2:0:5:0): CAM status: SCSI Status Error (da12:mps2:0:5:0): SCSI status: Check Condition (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) (da12:mps2:0:5:0): READ(6). 
CDB: 8 0 0 0 10 0
(da12:mps2:0:5:0): CAM status: SCSI Status Error
(da12:mps2:0:5:0): SCSI status: Check Condition
(da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information)
[the identical four-line ABORTED COMMAND error repeats, several times each, for READ(6) CDBs 8 0 0 0 10 0, 8 0 2 0 10 0, 8 0 0 80 10 0, 8 0 0 10 10 0, 8 0 0 40 1 0, 8 0 0 0 1 0, 8 0 0 2 1 0, 8 0 0 10 1 0, and 8 0 0 80 1 0, followed by two aborted cache flushes:]
(da12:mps2:0:5:0): SYNCHRONIZE CACHE(10). CDB: 35 0 0 0 0 0 0 0 0 0
(da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information)
Trying to mount root from ufs:/dev/gpt/disk0 [rw]...
ZFS filesystem version 5
ZFS storage pool version 28
(da12:mps2:0:5:0): WRITE(6). CDB: a 0 0 0 1 0
(da12:mps2:0:5:0): CAM status: SCSI Status Error
(da12:mps2:0:5:0): SCSI status: Check Condition
(da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information)
[the WRITE(6) error repeats five times, then another aborted SYNCHRONIZE CACHE(10), then further READ(6) aborts on CDBs 8 0 0 1 1 0 and 8 0 0 0 1 0, then:]
(da12:mps2:0:5:0): READ(10). CDB: 28 0 74 70 6d af 0 0 1 0
(da12:mps2:0:5:0): CAM status: SCSI Status Error
(da12:mps2:0:5:0): SCSI status: Check Condition
(da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information)
[the READ(10) error repeats five times, after which the whole READ(6)/SYNCHRONIZE CACHE(10) abort cycle above runs through once more, ending in a final aborted SYNCHRONIZE CACHE(10)]

-- 
Dennis Glatting

From owner-freebsd-fs@FreeBSD.ORG Fri Jan 20 18:18:30 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E0AFA106566C for ; Fri, 20 Jan 2012 18:18:30 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta11.emeryville.ca.mail.comcast.net (qmta11.emeryville.ca.mail.comcast.net [76.96.27.211]) by mx1.freebsd.org (Postfix) with ESMTP id BFA828FC12 for ; Fri, 20 Jan 2012 18:18:30 +0000 (UTC) Received: from omta19.emeryville.ca.mail.comcast.net ([76.96.30.76]) by qmta11.emeryville.ca.mail.comcast.net with comcast id PrJE1i0011eYJf8ABuJWGS; Fri, 20 Jan 2012 18:18:30 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta19.emeryville.ca.mail.comcast.net with comcast id PuJV1i00C1t3BNj01uJVFw; Fri, 20 Jan 2012 18:18:30 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 073B1102C19; Fri, 20 Jan 2012 10:18:29 -0800 (PST) Date: Fri, 20 Jan 2012 10:18:29 -0800 From: Jeremy Chadwick To: Dennis Glatting Message-ID: <20120120181828.GA1049@icarus.home.lan> References: <4F192ADA.5020903@brockmann-consult.de> <1327069331.29444.4.camel@btw.pki2.com> <20120120153129.GA97746@icarus.home.lan> <1327077094.29408.11.camel@btw.pki2.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1327077094.29408.11.camel@btw.pki2.com> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 20 Jan 2012 18:18:31 -0000
On Fri, Jan 20, 2012 at 08:31:34AM -0800, Dennis Glatting wrote:
> On Fri, 2012-01-20 at 07:31 -0800, Jeremy Chadwick wrote:
>
> > On Fri, Jan 20, 2012 at 06:22:11AM -0800, Dennis Glatting wrote:
> > > I am having a problem with Seagate ST1000DL002 disks, but I haven't yet
> > > determined whether it is the disks themselves (they -- two of them, new
> > > -- fail under a MB controller too).
> >
> > Assuming the disks are seen directly on the bus (e.g. show up as daX,
> > adaX, or whatever), please install ports/sysutils/smartmontools (make
> > sure you're using version 5.42 or newer) and please provide output from
> > the following command: "smartctl -a /dev/XXX", where XXX is the device
> > name of the ST1000DL002 disk(s). Please be sure to state which device
> > name is associated with which smartctl output. You can delete or
> > remove the disk serial numbers from the output (for privacy) if you
> > wish. I'll be happy to review the data and tell you whether or not the
> > disks themselves are showing problems or if the issue is elsewhere.
>
> That is the motivation I needed to reboot that system, which was 50%
> through a task. That said, as remains the case today, for the last 20
> years I haven't been able to find that "Any Key" on reboot. :)
>
> Regardless...

First off, let's start with the full picture. Readers need to know
exactly what is going on within your controller setup, what disks are
connected to what, etc. Taken from your full dmesg below, and turned
into something easy to read (mostly):

Controller mps0 --> LSI SAS2008 --> IRQ 19 on pci1 --> Firmware 12.00.00.00
--> Disks attached:
    --> da0  --> WDC WD25EZRS, SATA300
    --> da1  --> WDC WD25EZRS, SATA300
    --> da2  --> WDC WD25EZRS, SATA300
    --> da3  --> WDC WD25EZRS, SATA300
    --> da4  --> WDC WD25EZRS, SATA300
    --> da5  --> WDC WD25EZRS, SATA300
    --> da6  --> WDC WD25EZRS, SATA300
    --> da7  --> WDC WD25EZRS, SATA300

Controller mps1 --> LSI SAS2008 --> IRQ 19 on pci5 --> Firmware 12.00.00.00
--> Disks attached:
    --> None

Controller mps2 --> LSI SAS2008 --> IRQ 16 on pci6 --> Firmware 12.00.00.00
--> Disks attached:
    --> da8  --> WDC WD25EZRS, SATA300
    --> da9  --> WDC WD25EZRS, SATA300
    --> da10 --> WDC WD25EZRS, SATA300
    --> da11 --> WDC WD25EZRS, SATA300
    --> da12 --> ST1000DL002, SATA300

Controller ahci0 --> ATI IXP700 AHCI (4-port) --> IRQ 19 on pci0
--> Disks attached:
    --> ahcich0 --> ada0 --> Corsair Force 3 SSD, SATA600
    --> ahcich1 --> ada1 --> OCZ-AGILITY2 SSD, SATA300
    --> ahcich2 --> ada2 --> ST31000333AS, SATA300

Controller ata0 --> ATI IXP700/800 ATA133 (2-port/4-device, PATA) --> IRQ on pci0
--> I would assume this would be on IRQ 14 or 15, sigh...
--> Disks attached:
    --> None

Now that we have a full picture, let's continue.

> An attempt to write to it:
>
> bd3# dd if=/dev/zero of=/dev/da12
> dd: /dev/da12: Input/output error
> 1+0 records in
> 0+0 records out
> 0 bytes transferred in 0.378153 secs (0 bytes/sec)

The dd command you executed wrote zeros to the disk, 512 bytes at a
time, starting at LBA 0, and it failed when writing the first 512
bytes. So, from my perspective, writing to LBA 0 is failing.

You should also keep in mind that this dd command to zero the disk (if
it were to work) would take a very long time to complete. If you used a
larger block size (bs=64k or maybe larger), it would be a lot faster.
Just a tip. Starting with bs=512 (the default) is fine, or in this case
4096 would probably be better (see below), but whatever.
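For example (a sketch only -- the block sizes here are illustrative, and
the device name is the one from this thread):

  # larger blocks cut per-transaction overhead considerably
  dd if=/dev/zero of=/dev/da12 bs=64k
  # or match the drive's 4096-byte physical sector size
  dd if=/dev/zero of=/dev/da12 bs=4096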
> The disk is presently connected to this device (LSI 9211-8i) but I have
> also had it connected to the devices on the MB and I think to a
> SuperMicro board. I have also tried a different LSI board.

Thanks for sharing this -- this is important information, but let's not
start moving the drive around any more, okay? There's no point. The
information you've given is enough, and I'll explain it in detail.

> {snipping for brevity}
>
> bd3# smartctl -a /dev/da12
> smartctl 5.42 2011-10-20 r3458 [FreeBSD 9.0-STABLE amd64] (local build)
> Copyright (C) 2002-11 by Bruce Allen,
> http://smartmontools.sourceforge.net
>
> === START OF INFORMATION SECTION ===
> Model Family:     Seagate Barracuda Green (Adv. Format)
> Device Model:     ST1000DL002-9TT153
> Serial Number:    W1V06SLR
> LU WWN Device Id: 5 000c50 037e11be9
> Firmware Version: CC32
> User Capacity:    1,000,204,886,016 bytes [1.00 TB]
> Sector Size:      512 bytes logical/physical
> Device is:        In smartctl database [for details use: -P show]
> ATA Version is:   8
> ATA Standard is:  ATA-8-ACS revision 4
> Local Time is:    Fri Jan 20 08:22:34 2012 PST
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
>
> {snipping for brevity}
>
> SMART Attributes Data Structure revision number: 10
> Vendor Specific SMART Attributes with Thresholds:
> ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
>   1 Raw_Read_Error_Rate     0x000f 108   099   006    Pre-fail Always  -           241488
>   3 Spin_Up_Time            0x0003 087   070   000    Pre-fail Always  -           0
>   4 Start_Stop_Count        0x0032 100   100   020    Old_age  Always  -           28
>   5 Reallocated_Sector_Ct   0x0033 100   100   036    Pre-fail Always  -           0
>   7 Seek_Error_Rate         0x000f 100   253   030    Pre-fail Always  -           136324
>   9 Power_On_Hours          0x0032 100   100   000    Old_age  Always  -           576
>  10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
>  12 Power_Cycle_Count       0x0032 100   100   020    Old_age  Always  -           29
> 183 Runtime_Bad_Block       0x0032 100   100   000    Old_age  Always  -           0
> 184 End-to-End_Error        0x0032 100   100   099    Old_age  Always  -           0
> 187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
> 188 Command_Timeout         0x0032 100   100   000    Old_age  Always  -           0
> 189 High_Fly_Writes         0x003a 100   100   000    Old_age  Always  -           0
> 190 Airflow_Temperature_Cel 0x0022 073   062   045    Old_age  Always  -           27 (Min/Max 21/27)
> 191 G-Sense_Error_Rate      0x0032 100   100   000    Old_age  Always  -           0
> 192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           23
> 193 Load_Cycle_Count        0x0032 100   100   000    Old_age  Always  -           29
> 194 Temperature_Celsius     0x0022 027   040   000    Old_age  Always  -           27 (0 21 0 0 0)
> 195 Hardware_ECC_Recovered  0x001a 027   008   000    Old_age  Always  -           241488
> 197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
> 198 Offline_Uncorrectable   0x0010 100   100   000    Old_age  Offline -           0
> 199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
> 240 Head_Flying_Hours       0x0000 100   253   000    Old_age  Offline -           265544943010369
> 241 Total_LBAs_Written      0x0000 100   253   000    Old_age  Offline -           3746932548
> 242 Total_LBAs_Read         0x0000 100   253   000    Old_age  Offline -           3212957483
>
> SMART Error Log Version: 1
> No Errors Logged
>
> {snipping more}

Your SMART attributes here appear perfectly fine. There is no
indication of bad LBAs (sectors) on the drive, or even "suspect" LBAs
on the drive. If LBA 0, for example, was actually bad (meaning the
sector itself), that would show up in the SMART error log (most
likely), and if not there, at bare minimum as some form of incremented
RAW_VALUE field in one of many attributes (either 5, 197, or 198;
possibly 187, I forget).
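(If anyone wants to pull those pieces individually, or make the drive
scan its own surface, stock smartctl options cover it -- a sketch, using
the same device name as above:)

  smartctl -l error /dev/da12     # just the SMART error log
  smartctl -A /dev/da12           # just the vendor attribute table
  smartctl -t long /dev/da12      # queue a long self-test (reads every LBA)
  smartctl -l selftest /dev/da12  # check the self-test results afterward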
SMART attributes 1, 7, and 195 on Seagate drives are always "crazy";
that is to say, they are not incremental counters, they are
vendor-encoded. smartmontools does not know how to decode some of these
attributes (on SOME Seagate drives it does, on others it doesn't). I
state this because people read SMART attributes wrong ~70% of the time;
they see non-zero numbers and go "oh my god, it's broken!" No it isn't.
SMART attribute values/decoding are not part of the ATA specification
(even the working draft), so it's all proprietary, more or less.

I also want to assume attribute 240 is vendor-encoded as well, probably
as multiple data sets stored within the full 6-byte attribute field;
again, smartmontools doesn't know how to decode this. I wouldn't worry
about it, even though the number is huge. :-)

SMART attribute 184 keeps track of errors occurring between the drive
controller (on the PCB) and the drive cache; there are no cache errors.
That's good, and I'm glad to see vendors implementing this.

SMART attribute 188 indicates the drive itself has not counted any
command timeouts (these would be ATA commands sent from the OS through
the SATA/SAS controller to the drive controller, which timed out at the
phase when the drive attempted to read/write data from a sector).

SMART attribute 199 indicates there are no cabling problems or
"physical issues between the disk and the SATA/SAS controller" (bad
connectors, dust in the connectors, shoddy hot-swap plane, bad port,
etc.).

SMART attribute 183 is something I haven't seen before (I'm more
familiar with Western Digital disks), but it also looks fine.

So again: your drive looks perfectly healthy per SMART stats. But
there's something amusing about this situation that a lot of people
overlook...

> {snipping dmesg for brevity, but here's the URL for readers so they
> can see it themselves:
> http://lists.freebsd.org/pipermail/freebsd-fs/2012-January/013481.html
> }
>
> {simplify the SCSI errors shown}
>
> (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 1 1 0
> (da12:mps2:0:5:0): CAM status: SCSI Status Error
> (da12:mps2:0:5:0): SCSI status: Check Condition
> (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information)
> (da12:mps2:0:5:0): SYNCHRONIZE CACHE(10). CDB: 35 0 0 0 0 0 0 0 0 0
> (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information)
> (da12:mps2:0:5:0): READ(10). CDB: 28 0 74 70 6d af 0 0 1 0
> (da12:mps2:0:5:0): CAM status: SCSI Status Error
> (da12:mps2:0:5:0): SCSI status: Check Condition
> (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information)
> (da12:mps2:0:5:0): WRITE(6). CDB: a 0 0 0 1 0
> (da12:mps2:0:5:0): CAM status: SCSI Status Error
> (da12:mps2:0:5:0): SCSI status: Check Condition
> (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information)

Based on this, we know the following:

- The da12 disk is doing something weird when it comes to reads AND
  writes.
- The da12 disk is not timing out; it receives an immediate error on
  reads and writes (coming back from the controller; whether or not the
  ATA command block makes it to the disk is unknown, but I have to
  assume it does).
- The da12 disk, at one time, was working/usable, as indicated by some
  SMART attributes.
- The da12 disk is the only ST1000DL002 disk in the system.
- The da12 disk is on the same controller as 4 other disks.
- The da8 through da11 disks (WD25EZRS) on the mps2 controller are
  performing fine with no issues (I have to assume this).
- The ST1000DL002 disk is an Advanced Format disk (4096-byte sectors).
- All the WD25EZRS disks are Advanced Format disks (4096-byte sectors).
- The ST1000DL002 disk behaves badly when used on the on-board AHCI
  controller as well as on a completely different motherboard
  (presumably).

Here's the fun part: ATA commands being submitted from the OS to the
disk (specifically, to the controller on the disk itself) are working
fine. SMART attributes are obtained via an ATA command that, internally
on mechanical drives, fetches data from the HPA (Host Protected Area)
region of the drive (see Wikipedia if you don't know about this) and
returns that data. AFAIK this data is not cached in any way; it's
almost always read straight from the HPA. So this means we know I/O
communication between the OS and controller, and between the controller
and the disk, works fine. And we also know, at least with regards to
the HPA region, that the heads can read data from the HPA region
successfully. Great.

Could this be a controller problem (e.g. a firmware bug that affects
compatibility with ST1000DL002 drives)? I'm about 95% certain the
answer is no. The reason is that the ST1000DL002 drive behaved the same
when put on other controllers.

What all this means is that the drive, in effect, refuses to read data
from non-HPA regions of the disk -- that means LBA 0 through the end of
the disk. Why or how could this happen? Unknown, because there's a
*ton* of possibilities -- way more than I care to speculate. :-)

Have I seen this problem before? Yes -- many times, but only once with
a SATA drive:

- I see this on rare occasion with Fujitsu SCSI disks at my workplace,
  where the drives flat out refuse to do I/O any longer. However, these
  return a vendor-specific ASC + ASCQ that indicates the drive is in a
  "locked" or "frozen" state, requiring Fujitsu to investigate. I've
  seen it happen a good 10, maybe 20 times over the past few years on
  drives manufactured from 2001 to 2007. Thankfully Fujitsu provides
  full docs on their SCSI drives, so I was able to look up the ASC/ASCQ
  and figure out it was an internal drive failure. We disposed of the
  disks properly/securely.

- In the SATA case, the end-user's drive behaved the same as yours. I
  do not remember what brand (it really doesn't matter, though). In
  their case, however, the HPA region was corrupt; the drive spat out
  weird errors during SMART attribute fetch, and those attributes which
  it did fetch were *completely* garbled. My guess was a bad HPA region
  of the drive, combined with either a firmware bug or mechanical/head
  problems. The end-user RMA'd the drive and the replacement worked
  fine.

My advice at this point (#1 is optional):

1. If you're curious and just interested in learning: put the
   ST1000DL002 disk in a system where it's the only disk, hooked
   directly to the motherboard (and not in AHCI mode), and boot
   SeaTools from a CD or USB stick. I'm willing to bet you get back an
   error code on the quick/short test (which does more than just a
   SMART short test). If that does pass, try doing a long test (which
   reads all the LBAs on the drive). I'll be very, VERY surprised if
   that passes.

2. File an RMA with Seagate. The simple version is that all LBA I/O
   (standard read/write) is being rejected by the drive for unknown
   reasons.

Good luck, and I hope this sheds some light on the "fun" (or not so
fun) world of hard disk troubleshooting. And don't ask me to
troubleshoot an SSD. ;-)
-- 
| Jeremy Chadwick                                jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, US |
| Making life hard for others since 1977.             PGP 4BD6C0CB |

From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 01:31:27 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DD52C106564A for ; Sat, 21 Jan 2012 01:31:27 +0000 (UTC) (envelope-from claudius_herder@ambtec.de) Received: from server.ambtec.de (server.ambtec.de [IPv6:2a01:4f8:131:1381::2]) by mx1.freebsd.org (Postfix) with ESMTP id 74F5C8FC13 for ; Sat, 21 Jan 2012 01:31:26 +0000 (UTC) Received: from server.ambtec.de (localhost [127.0.0.1]) by server.ambtec.de (Postfix) with ESMTP id 92BAEE9F3 for ; Sat, 21 Jan 2012 02:31:25 +0100 (CET) X-Virus-Scanned: by amavisd-new using ClamAV at ambtec.de Received: from server.ambtec.de ([127.0.0.1]) by server.ambtec.de (server.ambtec.de [127.0.0.1]) (amavisd-new, port 10024) with LMTP id DdnctPLG8I1W for ; Sat, 21 Jan 2012 02:31:15 +0100 (CET) Received: from [192.168.0.101] (e176004018.adsl.alicedsl.de [85.176.4.18]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by server.ambtec.de (Postfix) with ESMTPSA id 4B1E5E9E7 for ; Sat, 21 Jan 2012 02:31:15 +0100 (CET) Message-ID: <4F1A1564.4080003@ambtec.de> Date: Sat, 21 Jan 2012 02:31:16 +0100 From: Claudius Herder User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:9.0) Gecko/20120114 Thunderbird/9.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org X-Enigmail-Version: 1.3.4 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="------------enig06918E1878A4A49BFC2E5262" Subject: zfs arc eating up all memory X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 01:31:27 -0000

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enig06918E1878A4A49BFC2E5262
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi all,

I kind of accidentally did 'du -sh' on my home directory and forgot
that I had set snapdir=visible. After some minutes, wired memory grew
from 2g to 7.5g, active memory dropped from 700m to 20m, and heavy
swapping occurred. I was not able to reproduce this behavior in my test
vm (i386 vbox), and testing on the server is difficult because I have
only ssh access, and if I do not kill 'du' in time I can only hard
reset the system. If I set snapdir=hidden there is no problem, even if
I run 'du -sh /'. Reducing vfs.zfs.arc_max to 2048m did not solve the
issue. Any ideas/hints to solve or work around this problem?

-- 
Claudius

There is not much data in my home, most of it is email.
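(As a concrete sketch of the workaround and of the counters worth
watching -- the dataset name is taken from the zfs list output below,
and these are stock zfs(8)/sysctl(8) invocations, not commands from
this thread:)

  zfs set snapdir=hidden zpool/usr/home/claudius   # keep du/find out of .zfs
  zfs get snapdir zpool/usr/home/claudius          # verify the setting
  sysctl kstat.zfs.misc.arcstats.size              # current ARC size, in bytes
  sysctl vfs.zfs.arc_meta_used vfs.zfs.arc_meta_limit  # ARC metadata vs. its cap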
#find /usr/home/claudius/.zfs |wc -l 4280597 #zfs list -t snapshot -H |grep claudius |wc -l 99 # zfs list -o space zpool/usr/home/claudius NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD zpool/usr/home/claudius 655G 1.43G 203M 1.23G 0 0 zfs-stats -a ------------------------------------------------------------------------ ZFS Subsystem Report Sat Jan 21 01:00:30 2012 ------------------------------------------------------------------------ System Information: Kernel Version: 900044 (osreldate) Hardware Platform: amd64 Processor Architecture: amd64 ZFS Storage pool Version: 28 ZFS Filesystem Version: 5 FreeBSD 9.0-RELEASE #1: Fri Jan 20 22:23:20 CET 2012 claudius 1:00AM up 2:35, 4 users, load averages: 0.67, 0.46, 0.31 ------------------------------------------------------------------------ System Memory: 0.69% 54.34 MiB Active, 2.79% 220.35 MiB Inact 93.68% 7.23 GiB Wired, 1.72% 135.54 MiB Cache 1.11% 88.04 MiB Free, 0.01% 1.10 MiB Gap Real Installed: 8.00 GiB Real Available: 99.76% 7.98 GiB Real Managed: 96.69% 7.72 GiB Logical Total: 8.00 GiB Logical Used: 94.58% 7.57 GiB Logical Free: 5.42% 443.92 MiB Kernel Memory: 3.81 GiB Data: 99.72% 3.80 GiB Text: 0.28% 10.77 MiB Kernel Memory Map: 7.69 GiB Size: 27.57% 2.12 GiB Free: 72.43% 5.57 GiB ------------------------------------------------------------------------ ARC Summary: (HEALTHY) Memory Throttle Count: 0 ARC Misc: Deleted: 132.60k Recycle Misses: 564.40k Mutex Misses: 84 Evict Skips: 84 ARC Size: 262.61% 5.25 GiB Target Size: (Adaptive) 12.50% 256.00 MiB Min Size (Hard Limit): 12.50% 256.00 MiB Max Size (High Water): 8:1 2.00 GiB ARC Size Breakdown: Recently Used Cache Size: 0.30% 16.03 MiB Frequently Used Cache Size: 99.70% 5.24 GiB ARC Hash Breakdown: Elements Max: 46.85k Elements Current: 39.41% 18.46k Collisions: 58.11k Chain Max: 6 Chains: 1.16k ------------------------------------------------------------------------ ARC Efficiency: 12.26m Cache Hit Ratio: 94.04% 11.52m Cache Miss Ratio: 5.96% 730.73k Actual Hit Ratio: 91.06% 11.16m Data Demand Efficiency: 98.42% 3.12m Data Prefetch Efficiency: 25.99% 17.37k CACHE HITS BY CACHE LIST: Most Recently Used: 2.63% 302.61k Most Frequently Used: 94.21% 10.86m Most Recently Used Ghost: 2.33% 268.62k Most Frequently Used Ghost: 3.59% 414.20k CACHE HITS BY DATA TYPE: Demand Data: 26.65% 3.07m Prefetch Data: 0.04% 4.51k Demand Metadata: 70.16% 8.09m Prefetch Metadata: 3.15% 362.82k CACHE MISSES BY DATA TYPE: Demand Data: 6.73% 49.20k Prefetch Data: 1.76% 12.86k Demand Metadata: 56.37% 411.90k Prefetch Metadata: 35.14% 256.77k ------------------------------------------------------------------------ L2ARC is disabled ------------------------------------------------------------------------ File-Level Prefetch: (HEALTHY) DMU Efficiency: 19.90m Hit Ratio: 61.43% 12.23m Miss Ratio: 38.57% 7.68m Colinear: 7.68m Hit Ratio: 0.02% 1.71k Miss Ratio: 99.98% 7.67m Stride: 12.06m Hit Ratio: 99.88% 12.05m Miss Ratio: 0.12% 14.05k DMU Misc: Reclaim: 7.67m Successes: 0.38% 28.93k Failures: 99.62% 7.65m Streams: 175.13k +Resets: 0.21% 376 -Resets: 99.79% 174.76k Bogus: 0 ------------------------------------------------------------------------ VDEV cache is disabled ------------------------------------------------------------------------ ZFS Tunables (sysctl): kern.maxusers 384 vm.kmem_size 8285048832 vm.kmem_size_scale 1 vm.kmem_size_min 0 vm.kmem_size_max 329853485875 vfs.zfs.l2c_only_size 0 vfs.zfs.mfu_ghost_data_lsize 1726464 vfs.zfs.mfu_ghost_metadata_lsize 34827776 vfs.zfs.mfu_ghost_size 36554240 
vfs.zfs.mfu_data_lsize 18432 vfs.zfs.mfu_metadata_lsize 93696 vfs.zfs.mfu_size 2208450048 vfs.zfs.mru_ghost_data_lsize 78406144 vfs.zfs.mru_ghost_metadata_lsize 152711168 vfs.zfs.mru_ghost_size 231117312 vfs.zfs.mru_data_lsize 262144 vfs.zfs.mru_metadata_lsize 606208 vfs.zfs.mru_size 33792000 vfs.zfs.anon_data_lsize 0 vfs.zfs.anon_metadata_lsize 0 vfs.zfs.anon_size 1862144 vfs.zfs.l2arc_norw 1 vfs.zfs.l2arc_feed_again 1 vfs.zfs.l2arc_noprefetch 1 vfs.zfs.l2arc_feed_min_ms 200 vfs.zfs.l2arc_feed_secs 1 vfs.zfs.l2arc_headroom 2 vfs.zfs.l2arc_write_boost 8388608 vfs.zfs.l2arc_write_max 8388608 vfs.zfs.arc_meta_limit 536870912 vfs.zfs.arc_meta_used 5638610032 vfs.zfs.arc_min 268435456 vfs.zfs.arc_max 2147483648 vfs.zfs.dedup.prefetch 1 vfs.zfs.mdcomp_disable 0 vfs.zfs.write_limit_override 0 vfs.zfs.write_limit_inflated 25707245568 vfs.zfs.write_limit_max 1071135232 vfs.zfs.write_limit_min 33554432 vfs.zfs.write_limit_shift 3 vfs.zfs.no_write_throttle 0 vfs.zfs.zfetch.array_rd_sz 1048576 vfs.zfs.zfetch.block_cap 256 vfs.zfs.zfetch.min_sec_reap 2 vfs.zfs.zfetch.max_streams 8 vfs.zfs.prefetch_disable 0 vfs.zfs.mg_alloc_failures 12 vfs.zfs.check_hostid 1 vfs.zfs.recover 0 vfs.zfs.txg.synctime_ms 1000 vfs.zfs.txg.timeout 5 vfs.zfs.scrub_limit 10 vfs.zfs.vdev.cache.bshift 16 vfs.zfs.vdev.cache.size 0 vfs.zfs.vdev.cache.max 16384 vfs.zfs.vdev.write_gap_limit 4096 vfs.zfs.vdev.read_gap_limit 32768 vfs.zfs.vdev.aggregation_limit 131072 vfs.zfs.vdev.ramp_rate 2 vfs.zfs.vdev.time_shift 6 vfs.zfs.vdev.min_pending 4 vfs.zfs.vdev.max_pending 10 vfs.zfs.vdev.bio_flush_disable 0 vfs.zfs.cache_flush_disable 0 vfs.zfs.zil_replay_disable 0 vfs.zfs.zio.use_uma 0 vfs.zfs.version.zpl 5 vfs.zfs.version.spa 28 vfs.zfs.version.acl 1 vfs.zfs.debug 0 vfs.zfs.super_owner 0 ------------------------------------------------------------------------ /boot/loader.conf
zfs_load="YES"
hw.usb.no_boot_wait=1
hw.usb.no_pf=1
vfs.zfs.arc_max="2048M"
Type InUse MemUse HighUse Requests Size(s) cred 576 90K - 899306 64,256 uidinfo 20 5K - 3909 128,2048 plimit 58 15K - 8019 256 acpidev 48 3K - 48 64 GEOM 134 23K - 792 16,32,64,128,256,512,1024,4096 sysctltmp 0 0K - 7422 16,32,64,128,256 sysctloid 5542 273K - 5696 16,32,64,128 sysctl 0 0K - 67333 16,32,64 tidhash 1 16K - 1 callout 7 3584K - 7 umtx 2340 293K - 2340 128 p1003.1b 1 1K - 1 16 SWAP 2 549K - 2 64 bus-sc 125 367K - 1014 16,32,128,256,512,2048 bus 731 78K - 4254 16,32,64,128,256,1024,2048 devstat 10 21K - 10 32,4096 eventhandler 74 6K - 74 64,128 acpica 4988 537K - 63439 16,32,64,128,256,512,1024,2048 acpitask 1 2K - 1 2048 kobj 81 324K - 139 4096 Per-cpu 1 1K - 1 32 entropy 1024 64K - 1024 64 pci_link 16 2K - 16 16,64,128 rman 224 26K - 630 16,32,128 acpi_perf 8 2K - 8 256 sbuf 1 1K - 1778 16,32,64,128,256,512,1024,2048,4096 CAM dev queue 7 1K - 7 128 acpisem 17 3K - 17 128 stack 0 0K - 4 256 taskqueue 81 8K - 111 16,32,64,128,1024 Unitno 9 1K - 146303 32,64 iov 0 0K - 106876 16,64,128,256,512 select 375 47K - 375 128 ioctlops 0 0K - 241721 16,32,64,128,256,512,1024,2048,4096 msg 4 30K - 4 2048,4096 sem 4 70K - 4 2048,4096 shm 27 72K - 2893 2048 tty 21 21K - 25 1024,2048 pts 4 1K - 4 256 accf 2 1K - 2 64 mbuf_tag 0 0K - 1078495 32,64,128 shmfd 1 8K - 1 CAM queue 25 1K - 87 16,32 pcb 55 158K - 3756 16,32,128,1024,2048,4096 soname 80 10K - 140021 16,32,128 acl 0 0K - 5468 4096 vfscache 1 2048K - 1 vfs_hash 1 1024K - 1 vnodes 4 1K - 4 64,256 USBdev 32 10K - 32 64,128,1024 vnodemarker 0 0K - 11404 512 mount 591 22K - 1634 16,32,64,128,256 BPF 8 1026K - 10 16,128,512
ether_multi 48 3K - 54 16,32,64 ifaddr 66 19K - 66 32,64,128,256,512,2048,4096 ifnet 5 9K - 5 128,2048 USB 56 19K - 56 16,32,128,2048 clone 6 24K - 6 4096 arpcom 1 1K - 1 16 lltable 17 7K - 17 256,512 CAM SIM 7 2K - 7 256 tun 1 1K - 1 256 mirror_data 3 1K - 27 64,128,512 isadev 7 1K - 7 128 routetbl 55 8K - 54541 32,64,128,256,512 igmp 4 1K - 4 256 DEVFS1 108 54K - 114 512 DEVFS3 130 33K - 133 256 cdev 8 2K - 8 256 in_multi 3 1K - 3 256 sctp_iter 0 0K - 5 256 sctp_ifn 3 1K - 3 128 sctp_ifa 7 1K - 7 128 sctp_vrf 1 1K - 1 64 sctp_a_it 0 0K - 5 16 hostcache 1 28K - 1 syncache 1 96K - 1 in6_multi 33 4K - 33 32,256 sigio 1 1K - 1 64 mld 4 1K - 4 128 crypto 18 10K - 139772 64,256,512,1024 xform 0 0K - 6696 16,32 audit_evclass 179 6K - 218 32 vm_pgdata 2 129K - 2 128 UMAHash 2 2K - 3 512,1024 filedesc 309 287K - 56100 16,32,64,128,512,1024,2048,4096 kenv 111 13K - 125 16,32,64,128 kqueue 114 129K - 5846 256,512,2048 proc-args 128 8K - 38829 16,32,64,128,256 hhook 2 1K - 2 128 DEVFS 25 1K - 26 16,128 ithread 88 14K - 88 32,128,256 memdesc 1 4K - 1 4096 DEVFSP 1 1K - 1 64 atkbddev 2 1K - 2 64 KTRACE 100 13K - 100 128 kbdmux 6 18K - 6 16,512,1024,2048 linker 61 7K - 71 16,32,64,128,512 lockf 212 19K - 157298 64,128,256 loginclass 3 1K - 485 64 ip6ndp 8 1K - 11 64,128 apmdev 1 1K - 1 128 ip6opt 0 0K - 3 32 temp 56 12K - 1456852 16,32,64,128,256,512,1024,2048,4096 devbuf 6995 19874K - 7014 16,32,64,128,256,512,1024,2048,4096 CAM periph 6 2K - 18 16,32,64,256 module 149 19K - 149 128 CAM XPT 52 33K - 584 32,64,128,1024,2048 qpidrv 1 1K - 1 16 mtx_pool 2 16K - 2 io_apic 1 2K - 1 2048 eli data 10 2K - 19288 64,256,512,1024,2048,4096 osd 20 1K - 236 16,64 acpiintr 1 1K - 1 64 MCA 9 2K - 9 64,128 msi 1 1K - 1 128 nexusdev 5 1K - 5 16 subproc 575 886K - 42983 512,4096 proc 2 16K - 2 session 61 8K - 6412 128 pgrp 75 10K - 17626 128 solaris 2696947 3948549K - 31910129 16,32,64,128,256,512,1024,2048,4096 kstat_data 4 1K - 4 64 dmesg: Copyright (c) 1992-2012 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 9.0-RELEASE #1: Fri Jan 20 22:23:20 CET 2012 claudius@server.ambtec.de:/usr/obj/usr/src/sys/CUSTOM amd64 CPU: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz (2673.35-MHz K8-class CPU) Origin =3D "GenuineIntel" Id =3D 0x106a5 Family =3D 6 Model =3D 1a Stepping =3D 5 Features=3D0xbfebfbff Features2=3D0x98e3bd AMD Features=3D0x28100800 AMD Features2=3D0x1 TSC: P-state invariant, performance statistics real memory =3D 8589934592 (8192 MB) avail memory =3D 8230801408 (7849 MB) Event timer "LAPIC" quality 400 ACPI APIC Table: <7522MS A7522800> FreeBSD/SMP: Multiprocessor System Detected: 8 CPUs FreeBSD/SMP: 1 package(s) x 4 core(s) x 2 SMT threads cpu0 (BSP): APIC ID: 0 cpu1 (AP): APIC ID: 1 cpu2 (AP): APIC ID: 2 cpu3 (AP): APIC ID: 3 cpu4 (AP): APIC ID: 4 cpu5 (AP): APIC ID: 5 cpu6 (AP): APIC ID: 6 cpu7 (AP): APIC ID: 7 ioapic0 irqs 0-23 on motherboard kbd1 at kbdmux0 cryptosoft0: on motherboard acpi0: <7522MS A7522800> on motherboard acpi0: Power Button (fixed) acpi0: reservation of 0, a0000 (3) failed acpi0: reservation of 100000, bff00000 (3) failed Timecounter "ACPI-fast" frequency 3579545 Hz quality 900 acpi_timer0: <24-bit timer at 3.579545MHz> port 0x808-0x80b on acpi0 cpu0: on acpi0 ACPI Warning: Incorrect checksum in table [OEMB] - 0xBB, should be 0xBA (20110527/tbutils-282) cpu1: on acpi0 cpu2: on acpi0 cpu3: on acpi0 cpu4: on acpi0 cpu5: on acpi0 cpu6: on acpi0 cpu7: on acpi0 pcib0: port 0xcf8-0xcff on acpi0 pci0: on pcib0 pcib1: at device 1.0 on pci0 pci1: on pcib1 pcib2: at device 3.0 on pci0 pci2: on pcib2 vgapci0: port 0xcc00-0xcc7f mem 0xfa000000-0xfaffffff,0xd0000000-0xdfffffff,0xf8000000-0xf9ffffff irq 16 at device 0.0 on pci2 pcib3: at device 7.0 on pci0 pci3: on pcib3 pci0: at device 20.0 (no driver attached) pci0: at device 20.1 (no driver attached) pci0: at device 20.2 (no driver attached) pci0: at device 20.3 (no driver attached) uhci0: port 0xbc00-0xbc1f irq 16 at device 26.0 on pci0 usbus0: on uhci0 uhci1: port 0xb880-0xb89f irq 21 at device 26.1 on pci0 usbus1: on uhci1 uhci2: port 0xb800-0xb81f irq 19 at device 26.2 on pci0 usbus2: on uhci2 ehci0: mem 0xf7ffe000-0xf7ffe3ff irq 18 at device 26.7 on pci0 usbus3: EHCI version 1.0 usbus3: on ehci0 pcib4: irq 17 at device 28.0 on pci0 pci4: on pcib4 pcib5: irq 17 at device 28.4 on pci0 pci6: on pcib5 re0: port 0xe800-0xe8ff mem 0xfbeff000-0xfbefffff,0xf6ff0000-0xf6ffffff irq 16 at device 0.0 on pci6 re0: Using 1 MSI-X message re0: Chip rev. 0x3c000000 re0: MAC rev. 
0x00400000 miibus0: on re0 rgephy0: PHY 1 on miibus= 0 rgephy0: none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow re0: Ethernet address: 40:61:86:2b:86:aa uhci3: port 0xb480-0xb49f irq 23 at device 29.0 on pci0 usbus4: on uhci3 uhci4: port 0xb400-0xb41f irq 19 at device 29.1 on pci0 usbus5: on uhci4 uhci5: port 0xb080-0xb09f irq 18 at device 29.2 on pci0 usbus6: on uhci5 ehci1: mem 0xf7ffc000-0xf7ffc3ff irq 23 at device 29.7 on pci0 usbus7: EHCI version 1.0 usbus7: on ehci1 pcib6: at device 30.0 on pci0 pci7: on pcib6 isab0: at device 31.0 on pci0 isa0: on isab0 atapci0: port 0xb000-0xb007,0xac00-0xac03,0xa880-0xa887,0xa800-0xa803,0xa480-0xa49f mem 0xf7ffa000-0xf7ffa7ff irq 19 at device 31.2 on pci0 atapci0: AHCI v1.20 controller with 6 3Gbps ports, PM not supported ata2: on atapci0 ata3: on atapci0 ata4: on atapci0 ata5: on atapci0 ata6: on atapci0 ata7: on atapci0 pci0: at device 31.3 (no driver attached) acpi_button0: on acpi0 attimer0: port 0x40-0x43 irq 0 on acpi0 Timecounter "i8254" frequency 1193182 Hz quality 0 Event timer "i8254" frequency 1193182 Hz quality 100 atrtc0: port 0x70-0x71 irq 8 on acpi0 Event timer "RTC" frequency 32768 Hz quality 0 atkbdc0: port 0x60,0x64 irq 1 on acpi0 atkbd0: irq 1 on atkbdc0 kbd0 at atkbd0 atkbd0: [GIANT-LOCKED] hpet0: iomem 0xfed00000-0xfed003ff on acpi0 Timecounter "HPET" frequency 14318180 Hz quality 950 Event timer "HPET" frequency 14318180 Hz quality 450 Event timer "HPET1" frequency 14318180 Hz quality 440 Event timer "HPET2" frequency 14318180 Hz quality 440 Event timer "HPET3" frequency 14318180 Hz quality 440 qpi0: on motherboard pcib7: pcibus 255 on qpi0 pci255: on pcib7 orm0: at iomem 0xce800-0xcf7ff on isa0 sc0: at flags 0x100 on isa0 sc0: VGA <16 virtual consoles, flags=3D0x300> vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0= coretemp0: on cpu0 est0: on cpu0 p4tcc0: on cpu0 coretemp1: on cpu1 est1: on cpu1 p4tcc1: on cpu1 coretemp2: on cpu2 est2: on cpu2 p4tcc2: on cpu2 coretemp3: on cpu3 est3: on cpu3 p4tcc3: on cpu3 coretemp4: on cpu4 est4: on cpu4 p4tcc4: on cpu4 coretemp5: on cpu5 est5: on cpu5 p4tcc5: on cpu5 coretemp6: on cpu6 est6: on cpu6 p4tcc6: on cpu6 coretemp7: on cpu7 est7: on cpu7 p4tcc7: on cpu7 ZFS filesystem version 5 ZFS storage pool version 28 Timecounters tick every 1.000 msec usbus0: 12Mbps Full Speed USB v1.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 12Mbps Full Speed USB v1.0 usbus3: 480Mbps High Speed USB v2.0 usbus4: 12Mbps Full Speed USB v1.0 usbus5: 12Mbps Full Speed USB v1.0 usbus6: 12Mbps Full Speed USB v1.0 usbus7: 480Mbps High Speed USB v2.0 ugen0.1: at usbus0 uhub0: on usbus0 ugen1.1: at usbus1 uhub1: on usbus1 ugen2.1: at usbus2 uhub2: on usbus2 ugen3.1: at usbus3 uhub3: on usbus3 ugen4.1: at usbus4 uhub4: on usbus4 ugen5.1: at usbus5 uhub5: on usbus5 ugen6.1: at usbus6 uhub6: on usbus6 ugen7.1: at usbus7 uhub7: on usbus7 uhub0: 2 ports with 2 removable, self powered uhub1: 2 ports with 2 removable, self powered uhub2: 2 ports with 2 removable, self powered uhub4: 2 ports with 2 removable, self powered uhub5: 2 ports with 2 removable, self powered uhub6: 2 ports with 2 removable, self powered ada0 at ata2 bus 0 scbus0 target 0 lun 0 ada0: ATA-7 SATA 2.x device ada0: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes) ada0: 715404MB (1465149168 512 byte sectors: 16H 63S/T 16383C) ada0: Previously was known as ad0 ada1 at 
ata3 bus 0 scbus1 target 0 lun 0 ada1: ATA-7 SATA 2.x device ada1: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes) ada1: 715404MB (1465149168 512 byte sectors: 16H 63S/T 16383C) ada1: Previously was known as ad1 SMP: AP CPU #1 Launched! SMP: AP CPU #5 Launched! SMP: AP CPU #7 Launched! SMP: AP CPU #3 Launched! SMP: AP CPU #4 Launched! SMP: AP CPU #2 Launched! SMP: AP CPU #6 Launched! uhub3: 6 ports with 6 removable, self powered uhub7: 6 ports with 6 removable, self powered GEOM_MIRROR: Device mirror/swap launched (2/2). Trying to mount root from zfs:zpool/rootfs [rw,noatime]... GEOM_ELI: Device mirror/swap.eli created. GEOM_ELI: Encryption: AES-XTS 256 GEOM_ELI: Crypto: software --------------enig06918E1878A4A49BFC2E5262 Content-Type: application/pgp-signature; name="signature.asc" Content-Description: OpenPGP digital signature Content-Disposition: attachment; filename="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.18 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk8aFWQACgkQum7BwTrPFfEpVwCbBTdEypIRJMQs2gL+MXn+xyY9 Y80AoMABd41pv9kLgJlBvSkGabU1aGmH =QEhj -----END PGP SIGNATURE----- --------------enig06918E1878A4A49BFC2E5262-- From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 03:43:18 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 71CDF106566C for ; Sat, 21 Jan 2012 03:43:18 +0000 (UTC) (envelope-from freebsd@pki2.com) Received: from btw.pki2.com (btw.pki2.com [IPv6:2001:470:a:6fd::2]) by mx1.freebsd.org (Postfix) with ESMTP id 82F9D8FC12 for ; Sat, 21 Jan 2012 03:43:17 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) by btw.pki2.com (8.14.5/8.14.5) with ESMTP id q0L3h9QV097742; Fri, 20 Jan 2012 19:43:09 -0800 (PST) (envelope-from freebsd@pki2.com) From: Dennis Glatting To: Jeremy Chadwick In-Reply-To: <20120120181828.GA1049@icarus.home.lan> References: <4F192ADA.5020903@brockmann-consult.de> <1327069331.29444.4.camel@btw.pki2.com> <20120120153129.GA97746@icarus.home.lan> <1327077094.29408.11.camel@btw.pki2.com> <20120120181828.GA1049@icarus.home.lan> Content-Type: text/plain; charset="ISO-8859-1" Date: Fri, 20 Jan 2012 19:43:08 -0800 Message-ID: <1327117388.29408.24.camel@btw.pki2.com> Mime-Version: 1.0 X-Mailer: Evolution 2.32.1 FreeBSD GNOME Team Port Content-Transfer-Encoding: 7bit X-yoursite-MailScanner-Information: Dennis Glatting X-yoursite-MailScanner-ID: q0L3h9QV097742 X-yoursite-MailScanner: Found to be clean X-MailScanner-From: freebsd@pki2.com Cc: freebsd-fs@freebsd.org Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 03:43:18 -0000 Data points update: I thought this problem may be related to a specific RAID controller (LSI 9211-8i - "R") first used on the disks. So I used it on a new, different set of disks. 
Those disks work fine afterwards: ada3 at ata0 bus 0 scbus6 target 0 lun 0 ada3: ATA-8 SATA 3.x device ada3: 150.000MB/s transfers (SATA, UDMA6, PIO 8192bytes) ada3: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada3: Previously was known as ad0 ada4: ATA-8 SATA 3.x device ada4: 150.000MB/s transfers (SATA, UDMA6, PIO 8192bytes) ada4: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C) ada4: Previously was known as ad1 bd3# dd if=/dev/zero of=/dev/ada3 count=8 8+0 records in 8+0 records out 4096 bytes transferred in 0.006000 secs (682662 bytes/sec) bd3# dd if=/dev/zero of=/dev/ada4 count=8 8+0 records in 8+0 records out 4096 bytes transferred in 0.001953 secs (2097408 bytes/sec) I used Seatools on one of the disks from the first set (ST1000DL002-9TT153). On a long test the tool declared there were errors that it could not fix. I didn't see much point in trying the second disk. So, two separately purchased disks from the same vendor bad? (TigerDirect) What are the odds of that? Hmm... On Fri, 2012-01-20 at 10:18 -0800, Jeremy Chadwick wrote: > On Fri, Jan 20, 2012 at 08:31:34AM -0800, Dennis Glatting wrote: > > On Fri, 2012-01-20 at 07:31 -0800, Jeremy Chadwick wrote: > > > > > On Fri, Jan 20, 2012 at 06:22:11AM -0800, Dennis Glatting wrote: > > > > I am having a problem with Seagate ST1000DL002 disks but I haven't yet > > > > determined whether it is the disks themselves (they -- two of them, new > > > > -- fail under a MB controller too). > > > > > > Assuming the disks are seen directly on the bus (e.g. show up as daX, > > > adaX, or whatever), please install ports/sysutils/smartmontools (make > > > sure you're using version 5.42 or newer) and please provide output from > > > the following command: "smartctl -a /dev/XXX" where XXX is the device > > > name of the ST1000DL002 disk(s). Please be sure to state which device > > > name is associated with which smartctl output. You can delete or > > > remove the disk serial numbers from the output (for privacy) if you > > > wish. I'll be happy to review the data and tell you whether or not the > > > disks themselves are showing problems or if the issue is elsewhere. > > > > That is the motivation I needed to reboot that system, which was 50% > > through a task. That said, as remains the case today, for the last 20 > > years I haven't been able to find that "Any Key" on reboot. :) > > > > Regardless... > > First off, let's start with the full picture. Readers need to know > exactly what is going on within your controller setup, what disks are > connected to what, etc.
Taken from your full dmesg below, and turned > into something easy-to-read (mostly) > > Controller mps0 > --> LSI SAS2008 > --> IRQ 19 on pci1 > --> Firmware 12.00.00.00 > --> Disks attached: > --> da0 --> WDC WD25EZRS, SATA300 > --> da1 --> WDC WD25EZRS, SATA300 > --> da2 --> WDC WD25EZRS, SATA300 > --> da3 --> WDC WD25EZRS, SATA300 > --> da4 --> WDC WD25EZRS, SATA300 > --> da5 --> WDC WD25EZRS, SATA300 > --> da6 --> WDC WD25EZRS, SATA300 > --> da7 --> WDC WD25EZRS, SATA300 > > Controller mps1 > --> LSI SAS2008 > --> IRQ 19 on pci5 > --> Firmware 12.00.00.00 > --> Disks attached: > --> None > > Controller mps2 > --> LSI SAS2008 > --> IRQ 16 on pci6 > --> Firmware 12.00.00.00 > --> Disks attached: > --> da8 --> WDC WD25EZRS, SATA300 > --> da9 --> WDC WD25EZRS, SATA300 > --> da10 --> WDC WD25EZRS, SATA300 > --> da11 --> WDC WD25EZRS, SATA300 > --> da12 --> ST1000DL002, SATA300 > > Controller ahci0 > --> ATI IXP700 AHCI (4-port) > --> IRQ 19 on pci0 > --> Disks attached: > --> ahcich0 --> ada0 --> Corsair Force 3 SSD, SATA600 > --> ahcich1 --> ada1 --> OCZ-AGILITY2 SSD, SATA300 > --> ahcich2 --> ada2 --> ST31000333AS, SATA300 > > Controller ata0 > --> ATI IXP700/800 ATA133 (2-port/4-device, PATA) > --> IRQ on pci0 > --> I would assume this would be on IRQ 14 or 15, sigh... > --> Disks attached: > --> None > > Now that we have a full picture, let's continue. > > > An attempt to write to it: > > > > bd3# dd if=/dev/zero of=/dev/da12 > > dd: /dev/da12: Input/output error > > 1+0 records in > > 0+0 records out > > 0 bytes transferred in 0.378153 secs (0 bytes/sec) > > The dd command you executed to write zeros to the disk, 512 bytes at a > time, starting at LBA 0, failed when writing the first 512 bytes. So, > from my perspective, writing to LBA 0 is failing. > > You should also keep in mind that this dd command to zero the disk (if > it was to work) would take a very long time to complete. If you used a > larger block size (bs=64k or maybe larger), it would be a lot faster. > Just a tip. Starting with bs=512 (default) is fine, or in this case > using 4096 would probably be better (see below), but whatever. > > > The disk is presently connected to this device (LSI 9211-8i) but I have > > also had it connected to the devices on the MB and I think to a > > SuperMicro board. I have also tried a different LSI board. > > Thanks for sharing this -- this is important information, but let's not > start moving the drive around any more, okay? There's no point. The > information you've given is enough, and I'll explain it in detail. > > > {snipping for brevity} > > > > bd3# smartctl -a /dev/da12 > > smartctl 5.42 2011-10-20 r3458 [FreeBSD 9.0-STABLE amd64] (local build) > > Copyright (C) 2002-11 by Bruce Allen, > > http://smartmontools.sourceforge.net > > > > === START OF INFORMATION SECTION === > > Model Family: Seagate Barracuda Green (Adv. Format) > > Device Model: ST1000DL002-9TT153 > > Serial Number: W1V06SLR > > LU WWN Device Id: 5 000c50 037e11be9 > > Firmware Version: CC32 > > User Capacity: 1,000,204,886,016 bytes [1.00 TB] > > Sector Size: 512 bytes logical/physical > > Device is: In smartctl database [for details use: -P show] > > ATA Version is: 8 > > ATA Standard is: ATA-8-ACS revision 4 > > Local Time is: Fri Jan 20 08:22:34 2012 PST > > SMART support is: Available - device has SMART capability.
> > SMART support is: Enabled > > > > {snipping for brevity} > > > > SMART Attributes Data Structure revision number: 10 > > Vendor Specific SMART Attributes with Thresholds: > > ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE > > 1 Raw_Read_Error_Rate 0x000f 108 099 006 Pre-fail Always - 241488 > > 3 Spin_Up_Time 0x0003 087 070 000 Pre-fail Always - 0 > > 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 28 > > 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0 > > 7 Seek_Error_Rate 0x000f 100 253 030 Pre-fail Always - 136324 > > 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 576 > > 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 > > 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 29 > > 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0 > > 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0 > > 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 > > 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 > > 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 > > 190 Airflow_Temperature_Cel 0x0022 073 062 045 Old_age Always - 27 (Min/Max 21/27) > > 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0 > > 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 23 > > 193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 29 > > 194 Temperature_Celsius 0x0022 027 040 000 Old_age Always - 27 (0 21 0 0 0) > > 195 Hardware_ECC_Recovered 0x001a 027 008 000 Old_age Always - 241488 > > 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 > > 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 > > 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 > > 240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 265544943010369 > > 241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 3746932548 > > 242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 3212957483 > > > > SMART Error Log Version: 1 > > No Errors Logged > > > > {snipping more} > > Your SMART attributes here appear perfectly fine. There is no > indication of bad LBAs (sectors) on the drive, or even "suspect" LBAs on > the drive. If LBA 0, for example, was actually bad (meaning the sector > itself), that would show up in the SMART error log (most likely), and if > not there, at bare minimum as some form of incremented RAW_VALUE field > in one of many attributes (either 5, 197, or 198; possibly 187, I forget). > > SMART attributes 1, 7, and 195 on Seagate drives are always "crazy"; > that is to say, they are not incremental counters, they are > vendor-encoded. smartmontools does not know how to decode some of these > attributes (on SOME Seagate drives it does, on others it doesn't). I > state this because people read SMART attributes wrong ~70% of the time; > they see non-zero numbers and go "oh my god, it's broken!" No it isn't. > SMART attribute values/decoding are not part of the ATA specification > (even working draft), so it's all proprietary more or less. > > I also want to assume attribute 240 is vendor-encoded as well, probably > as multiple data sets stored within the full 6-byte attribute field; > again, smartmontools doesn't know how to decode this. I wouldn't worry > about this, again even though the number is huge. :-) > > SMART attribute 184 keeps track of errors occurring between the drive > controller (on the PCB) and the drive cache; there are no cache errors. > That's good, and I'm glad to see vendors implementing this. 
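>
> (If you want to re-check any of this yourself later, the relevant
> smartctl invocations are roughly the following -- treat this as a
> sketch: the device name da12 is assumed from above, and on some
> controllers you may additionally need "-d sat":
>
>   smartctl -A /dev/da12           # dump just the attribute table
>   smartctl -t long /dev/da12      # kick off the extended self-test
>   smartctl -l selftest /dev/da12  # read the self-test log afterwards
>
> Adjust device names to taste.)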
> > SMART attribute 188 indicates the drive itself has not counted any > command timeouts (these would be ATA commands sent from the OS through > the SATA/SAS controller to the drive controller, which timed out at the > phase when the drive attempted to read/write data from a sector). > > SMART attribute 199 indicates there are no cabling problems or "physical > issues between the disk and the SATA/SAS controller" (bad connectors, > dust in the connectors, shoddy hot-swap backplane, bad port, etc.). > > SMART attribute 183 is something I haven't seen before (I'm more > familiar with Western Digital disks), but it also looks fine. > > So again: your drive looks perfectly healthy per SMART stats. But > there's something amusing about this situation that a lot of people > overlook... > > > {snipping dmesg for brevity, but here's the URL for readers so they > > can see it themselves: > > http://lists.freebsd.org/pipermail/freebsd-fs/2012-January/013481.html > > } > > > > {simplify the SCSI errors shown} > > > > (da12:mps2:0:5:0): READ(6). CDB: 8 0 0 1 1 0 > > (da12:mps2:0:5:0): CAM status: SCSI Status Error > > (da12:mps2:0:5:0): SCSI status: Check Condition > > (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) > > (da12:mps2:0:5:0): SYNCHRONIZE CACHE(10). CDB: 35 0 0 0 0 0 0 0 0 0 > > (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) > > (da12:mps2:0:5:0): READ(10). CDB: 28 0 74 70 6d af 0 0 1 0 > > (da12:mps2:0:5:0): CAM status: SCSI Status Error > > (da12:mps2:0:5:0): SCSI status: Check Condition > > (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) > > (da12:mps2:0:5:0): WRITE(6). CDB: a 0 0 0 1 0 > > (da12:mps2:0:5:0): CAM status: SCSI Status Error > > (da12:mps2:0:5:0): SCSI status: Check Condition > > (da12:mps2:0:5:0): SCSI sense: ABORTED COMMAND asc:0,0 (No additional sense information) > > Based on this, we know the following: > > - The da12 disk is doing something weird when it comes to reads AND > writes. > - The da12 disk is not timing out; it receives an immediate error on > reads and writes (coming back from the controller; whether or not the > ATA command block makes it to the disk is unknown, but I have to > assume it does). > - The da12 disk, at one time, was working/usable as indicated by some > SMART attributes. > - The da12 disk is the only ST1000DL002 disk in the system. > - The da12 disk is on the same controller as 4 other disks. > - The da8 through da11 disks (WD25EZRS) on the mps2 controller are > performing fine with no issues (I have to assume this). > - The ST1000DL002 disk is an Advanced Format disk (4096-byte sectors). > - All the WD25EZRS disks are Advanced Format disks (4096-byte sectors). > - The ST1000DL002 disk behaves badly when used on the on-board AHCI > controller as well as a completely different motherboard (presumably). > > Here's the fun part: > > ATA commands being submitted from the OS to the disk (specifically the > controller on the disk itself) are working fine. SMART attributes are > obtained via an ATA command that, internally on mechanical drives, > fetches data from the HPA (Host Protected Area) region of the drive (see > Wikipedia if you don't know about this), and returns that data. AFAIK > this data is not cached in any way, it's almost always read straight > from the HPA. > > So this means we know I/O communication between the OS and controller, > and the controller and the disk, works fine.
And we also know, at least > with regards to the HPA region, that the heads can read data from the HPA > region successfully. Great. > > Could this be a controller problem (e.g. a firmware bug that affects > compatibility with ST1000DL002 drives)? I'm about 95% certain the > answer is no. The reason is that the ST1000DL002 drive behaved the same > when put on other controllers. > > What all this means is that the drive, in effect, refuses to read data > from non-HPA regions of the disk -- that means LBA 0 to . Why > or how could this happen? Unknown, because there's a *ton* of > possibilities -- way more than I care to speculate. :-) > > Have I seen this problem before? Yes -- many times, but only once with > a SATA drive: > > - I see this on rare occasion with Fujitsu SCSI disks at my workplace, > where the drives flat out refuse to do I/O any longer. However, these > return a vendor-specific ASC + ASCQ that indicate the drive is in a > "locked" or "frozen" state, requiring Fujitsu to investigate. I've seen > it happen a good 10, maybe 20 times over the past few years on drives > manufactured from 2001 to 2007. Thankfully Fujitsu provides full docs > on their SCSI drives so I was able to look up the ASC/ASCQ and figure > out it was an internal drive failure. We disposed of the disks > properly/securely. > > - In the SATA case, the end-user's drive behaved the same as yours. I > do not remember what brand (it really doesn't matter though). In their > case, however, the HPA region was corrupt; the drive spit out weird > errors during SMART attribute fetch, and those attributes which it did > fetch were *completely* garbled. My guess was a bad HPA region of the > drive, combined with either a firmware bug or something mechanical or > head problems. The end-user RMA'd the drive and the replacement worked > fine. > > My advice at this point (#1 is optional): > > 1. If you're curious and just interested in learning: put the > ST1000DL002 disk on a system where it's the only disk, and hooked > directly to the motherboard (and not in AHCI mode), and boot SeaTools > from a CD or USB stick. > > I'm willing to bet you get back an error code on the quick/short test > (which does more than just a SMART short test). If that does pass, try > doing a long test (which reads all the LBAs on the drive). I'll be > very, VERY surprised if that passes. > > 2. File an RMA with Seagate. The simple version is that all LBA I/O > (standard read/write) is being rejected by the drive for unknown > reasons. > > Good luck, and hope this sheds some light on the "fun" (or not so fun) > world of hard disk troubleshooting. And don't ask me to troubleshoot an > SSD. 
;-) > From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 08:13:04 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4E0581065674; Sat, 21 Jan 2012 08:13:04 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from mail.zoral.com.ua (mx0.zoral.com.ua [91.193.166.200]) by mx1.freebsd.org (Postfix) with ESMTP id 976DD8FC0A; Sat, 21 Jan 2012 08:13:01 +0000 (UTC) Received: from skuns.kiev.zoral.com.ua (localhost [127.0.0.1]) by mail.zoral.com.ua (8.14.2/8.14.2) with ESMTP id q0L8Cvge031546 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 21 Jan 2012 10:12:57 +0200 (EET) (envelope-from kostikbel@gmail.com) Received: from deviant.kiev.zoral.com.ua (kostik@localhost [127.0.0.1]) by deviant.kiev.zoral.com.ua (8.14.5/8.14.5) with ESMTP id q0L8CvSC003717; Sat, 21 Jan 2012 10:12:57 +0200 (EET) (envelope-from kostikbel@gmail.com) Received: (from kostik@localhost) by deviant.kiev.zoral.com.ua (8.14.5/8.14.5/Submit) id q0L8CvPH003716; Sat, 21 Jan 2012 10:12:57 +0200 (EET) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: deviant.kiev.zoral.com.ua: kostik set sender to kostikbel@gmail.com using -f Date: Sat, 21 Jan 2012 10:12:57 +0200 From: Kostik Belousov To: John Baldwin Message-ID: <20120121081257.GS31224@deviant.kiev.zoral.com.ua> References: <201201181707.21293.jhb@freebsd.org> <201201191026.09431.jhb@freebsd.org> <20120119160156.GF31224@deviant.kiev.zoral.com.ua> <201201191117.28128.jhb@freebsd.org> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="WWZbIj0qUfl+CqYL" Content-Disposition: inline In-Reply-To: <201201191117.28128.jhb@freebsd.org> User-Agent: Mutt/1.4.2.3i X-Virus-Scanned: clamav-milter 0.95.2 at skuns.kiev.zoral.com.ua X-Virus-Status: Clean X-Spam-Status: No, score=-3.9 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00 autolearn=ham version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on skuns.kiev.zoral.com.ua Cc: Rick Macklem , fs@freebsd.org, Peter Wemm Subject: Re: Race in NFS lookup can result in stale namecache entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 08:13:04 -0000 --WWZbIj0qUfl+CqYL Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Jan 19, 2012 at 11:17:28AM -0500, John Baldwin wrote: > On Thursday, January 19, 2012 11:01:56 am Kostik Belousov wrote: > > On Thu, Jan 19, 2012 at 10:26:09AM -0500, John Baldwin wrote: > > > On Thursday, January 19, 2012 9:06:13 am Kostik Belousov wrote: > > > > On Wed, Jan 18, 2012 at 05:07:21PM -0500, John Baldwin wrote: > > > > ... > > > > > What I concluded is that it would really be far simpler and more > > > > > obvious if the cached timestamps were stored in the namecache entry > > > > > directly rather than having multiple name cache entries validated by > > > > > shared state in the nfsnode. This does mean allowing the name cache > > > > > to hold some filesystem-specific state. However, I felt this was much > > > > > cleaner than adding a lot more complexity to nfs_lookup().
Also, this > > > > > turns out to be fairly non-invasive to implement since nfs_lookup() > > > > > calls cache_lookup() directly, but other filesystems only call it > > > > > indirectly via vfs_cache_lookup(). I considered letting filesystems > > > > > store a void * cookie in the name cache entry and having them provide > > > > > a destructor, etc. However, that would require extra allocations for > > > > > NFS lookups. Instead, I just adjusted the name cache API to > > > > > explicitly allow the filesystem to store a single timestamp in a name > > > > > cache entry by adding a new 'cache_enter_time()' that accepts a struct > > > > > timespec that is copied into the entry. 'cache_enter_time()' also > > > > > saves the current value of 'ticks' in the entry. 'cache_lookup()' is > > > > > modified to add two new arguments used to return the timespec and > > > > > ticks value used for a namecache entry when a hit in the cache occurs. > > > > > > > > > > One wrinkle with this is that the name cache does not create actual > > > > > entries for ".", and thus it would not store any timestamps for those > > > > > lookups. To fix this I changed the NFS client to explicitly fast-path > > > > > lookups of "." by always returning the current directory as setup by > > > > > cache_lookup() and never bothering to do a LOOKUP or check for stale > > > > > attributes in that case. > > > > > > > > > > The current patch against 8 is at > > > > > http://www.FreeBSD.org/~jhb/patches/nfs_lookup.patch > > > > ... > > > > > > > > So now you add 8*2+4 bytes to each namecache entry on amd64 unconditionally. > > > > Current size of the struct namecache invariant part on amd64 is 72 bytes, > > > > so addition of 20 bytes looks slightly excessive. I am not sure about > > > > typical distribution of the namecache nc_name length, so it is unobvious > > > > whether the change changes the memory usage significantly. > > > > > > > > A flag could be added to nc_flags to indicate the presence of timestamp. > > > > The timestamps would be conditionally placed after nc_nlen, we probably > > > > could use union to ease the access. Then, the direct dereferences of > > > > nc_name would need to be converted to some inline function. > > > > > > > > I can do this after your patch is committed, if you consider the memory > > > > usage saving worth it. > > > > > > Hmm, if the memory usage really is worrying then I could move to using the > > > void * cookie method instead. > > > > I think the current approach is better than a cookie that again will be > > used only for NFS. With the cookie, you still have 8 bytes for each ncp. > > With union, you do not have the overhead for !NFS. > > > > Default setup allows for ~300000 vnodes on a not too powerful amd64 machine, > > the ncsizefactor 2 together with 8 bytes for cookie is 4.5MB. For 20 bytes > > per ncp, we get 12MB overhead. > > Ok. If you want to tackle the union bits I'm happy to let you do so. That > will at least break up the changes a bit. Below is my take. First version of the patch added both small and large zones with ts, but later I decided that large does not make sense. If wanted, it can be restored easily.
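(For reference, the arithmetic behind the figures above, assuming ~300000 vnodes and ncsizefactor 2, i.e. ~600000 namecache entries: 600000 * 8 bytes for a cookie pointer is ~4.8 million bytes (~4.6MiB), and 600000 * 20 bytes of unconditional timestamps is 12 million bytes (~11.4MiB) -- the 4.5MB and 12MB estimates quoted above, give or take rounding.)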
diff --git a/sys/kern/vfs_cache.c b/sys/kern/vfs_cache.c
index aa269de..c11f25f 100644
--- a/sys/kern/vfs_cache.c
+++ b/sys/kern/vfs_cache.c
@@ -97,14 +97,36 @@ struct namecache {
 	TAILQ_ENTRY(namecache) nc_dst;	/* destination vnode list */
 	struct vnode *nc_dvp;		/* vnode of parent of name */
 	struct vnode *nc_vp;		/* vnode the name refers to */
-	struct timespec nc_time;	/* timespec provided by fs */
-	int nc_ticks;			/* ticks value when entry was added */
 	u_char nc_flag;			/* flag bits */
 	u_char nc_nlen;			/* length of name */
 	char nc_name[0];		/* segment name + nul */
 };
 
 /*
+ * struct namecache_ts repeats struct namecache layout up to the
+ * nc_nlen member.
+ */
+struct namecache_ts {
+	LIST_ENTRY(namecache) nc_hash;	/* hash chain */
+	LIST_ENTRY(namecache) nc_src;	/* source vnode list */
+	TAILQ_ENTRY(namecache) nc_dst;	/* destination vnode list */
+	struct vnode *nc_dvp;		/* vnode of parent of name */
+	struct vnode *nc_vp;		/* vnode the name refers to */
+	u_char nc_flag;			/* flag bits */
+	u_char nc_nlen;			/* length of name */
+	struct timespec nc_time;	/* timespec provided by fs */
+	int nc_ticks;			/* ticks value when entry was added */
+	char nc_name[0];		/* segment name + nul */
+};
+
+/*
+ * Flags in namecache.nc_flag
+ */
+#define NCF_WHITE	0x01
+#define NCF_ISDOTDOT	0x02
+#define NCF_TS		0x04
+
+/*
  * Name caching works as follows:
  *
  * Names found by directory scans are retained in a cache
@@ -166,20 +188,50 @@ RW_SYSINIT(vfscache, &cache_lock, "Name Cache");
  * fit in the small cache.
  */
 static uma_zone_t cache_zone_small;
+static uma_zone_t cache_zone_small_ts;
 static uma_zone_t cache_zone_large;
 
 #define CACHE_PATH_CUTOFF	35
-#define CACHE_ZONE_SMALL	(sizeof(struct namecache) + CACHE_PATH_CUTOFF \
-				    + 1)
-#define CACHE_ZONE_LARGE	(sizeof(struct namecache) + NAME_MAX + 1)
-
-#define cache_alloc(len)	uma_zalloc(((len) <= CACHE_PATH_CUTOFF) ? \
-	cache_zone_small : cache_zone_large, M_WAITOK)
-#define cache_free(ncp)		do { \
-	if (ncp != NULL) \
-		uma_zfree(((ncp)->nc_nlen <= CACHE_PATH_CUTOFF) ? \
-		    cache_zone_small : cache_zone_large, (ncp)); \
-} while (0)
+
+static struct namecache *
+cache_alloc(int len, int ts)
+{
+
+	if (len > CACHE_PATH_CUTOFF)
+		return (uma_zalloc(cache_zone_large, M_WAITOK));
+	if (ts)
+		return (uma_zalloc(cache_zone_small_ts, M_WAITOK));
+	else
+		return (uma_zalloc(cache_zone_small, M_WAITOK));
+}
+
+static void
+cache_free(struct namecache *ncp)
+{
+	int ts;
+
+	if (ncp == NULL)
+		return;
+	ts = ncp->nc_flag & NCF_TS;
+	if (ncp->nc_nlen <= CACHE_PATH_CUTOFF) {
+		if (ts)
+			uma_zfree(cache_zone_small_ts, ncp);
+		else
+			uma_zfree(cache_zone_small, ncp);
+	} else
+		uma_zfree(cache_zone_large, ncp);
+}
+
+static char *
+nc_get_name(struct namecache *ncp)
+{
+	struct namecache_ts *ncp_ts;
+
+	if ((ncp->nc_flag & NCF_TS) == 0)
+		return (ncp->nc_name);
+	ncp_ts = (struct namecache_ts *)ncp;
+	return (ncp_ts->nc_name);
+}
 
 static int doingcache = 1;		/* 1 => enable the cache */
 SYSCTL_INT(_debug, OID_AUTO, vfscache, CTLFLAG_RW, &doingcache, 0,
@@ -235,12 +287,6 @@ static int vn_fullpath1(struct thread *td, struct vnode *vp, struct vnode *rdir,
 
 static MALLOC_DEFINE(M_VFSCACHE, "vfscache", "VFS name cache entries");
 
-/*
- * Flags in namecache.nc_flag
- */
-#define NCF_WHITE	0x01
-#define NCF_ISDOTDOT	0x02
-
 #ifdef DIAGNOSTIC
 /*
  * Grab an atomic snapshot of the name cache hash chain lengths
@@ -346,10 +392,10 @@ cache_zap(ncp)
 #ifdef KDTRACE_HOOKS
 	if (ncp->nc_vp != NULL) {
 		SDT_PROBE(vfs, namecache, zap, done, ncp->nc_dvp,
-		    ncp->nc_name, ncp->nc_vp, 0, 0);
+		    nc_get_name(ncp), ncp->nc_vp, 0, 0);
 	} else {
 		SDT_PROBE(vfs, namecache, zap_negative, done, ncp->nc_dvp,
-		    ncp->nc_name, 0, 0, 0);
+		    nc_get_name(ncp), 0, 0, 0);
 	}
 #endif
 	vp = NULL;
@@ -460,10 +506,17 @@ retry_wlocked:
 			    dvp, cnp->cn_nameptr, *vpp);
 			SDT_PROBE(vfs, namecache, lookup, hit, dvp, "..",
 			    *vpp, 0, 0);
-			if (tsp != NULL)
-				*tsp = ncp->nc_time;
-			if (ticksp != NULL)
-				*ticksp = ncp->nc_ticks;
+			if (tsp != NULL) {
+				KASSERT((ncp->nc_flag & NCF_TS) != 0,
+				    ("No NCF_TS"));
+				*tsp = ((struct namecache_ts *)ncp)->nc_time;
+			}
+			if (ticksp != NULL) {
+				KASSERT((ncp->nc_flag & NCF_TS) != 0,
+				    ("No NCF_TS"));
+				*ticksp = ((struct namecache_ts *)ncp)->
+				    nc_ticks;
+			}
 			goto success;
 		}
 	}
@@ -473,7 +526,7 @@ retry_wlocked:
 	LIST_FOREACH(ncp, (NCHHASH(hash)), nc_hash) {
 		numchecks++;
 		if (ncp->nc_dvp == dvp && ncp->nc_nlen == cnp->cn_namelen &&
-		    !bcmp(ncp->nc_name, cnp->cn_nameptr, ncp->nc_nlen))
+		    !bcmp(nc_get_name(ncp), cnp->cn_nameptr, ncp->nc_nlen))
 			break;
 	}
 
@@ -508,12 +561,16 @@ retry_wlocked:
 	*vpp = ncp->nc_vp;
 	CTR4(KTR_VFS, "cache_lookup(%p, %s) found %p via ncp %p",
 	    dvp, cnp->cn_nameptr, *vpp, ncp);
-	SDT_PROBE(vfs, namecache, lookup, hit, dvp, ncp->nc_name,
+	SDT_PROBE(vfs, namecache, lookup, hit, dvp, nc_get_name(ncp),
 	    *vpp, 0, 0);
-	if (tsp != NULL)
-		*tsp = ncp->nc_time;
-	if (ticksp != NULL)
-		*ticksp = ncp->nc_ticks;
+	if (tsp != NULL) {
+		KASSERT((ncp->nc_flag & NCF_TS) != 0, ("No NCF_TS"));
+		*tsp = ((struct namecache_ts *)ncp)->nc_time;
+	}
+	if (ticksp != NULL) {
+		KASSERT((ncp->nc_flag & NCF_TS) != 0, ("No NCF_TS"));
+		*ticksp = ((struct namecache_ts *)ncp)->nc_ticks;
+	}
 	goto success;
 }
 
@@ -543,12 +600,16 @@ negative_success:
 	nchstats.ncs_neghits++;
 	if (ncp->nc_flag & NCF_WHITE)
 		cnp->cn_flags |= ISWHITEOUT;
-	SDT_PROBE(vfs, namecache, lookup, hit_negative, dvp, ncp->nc_name,
+	SDT_PROBE(vfs, namecache, lookup, hit_negative, dvp, nc_get_name(ncp),
 	    0, 0, 0);
-	if (tsp != NULL)
-		*tsp = ncp->nc_time;
-	if (ticksp != NULL)
-		*ticksp = ncp->nc_ticks;
+	if (tsp != NULL) {
+		KASSERT((ncp->nc_flag & NCF_TS) != 0, ("No NCF_TS"));
+		*tsp = ((struct namecache_ts *)ncp)->nc_time;
+	}
+	if (ticksp != NULL) {
+		KASSERT((ncp->nc_flag & NCF_TS) != 0, ("No NCF_TS"));
+		*ticksp = ((struct namecache_ts *)ncp)->nc_ticks;
+	}
 	CACHE_WUNLOCK();
 	return (ENOENT);
 
@@ -642,6 +703,7 @@ cache_enter_time(dvp, vp, cnp, tsp)
 	struct timespec *tsp;
 {
 	struct namecache *ncp, *n2;
+	struct namecache_ts *n3;
 	struct nchashhead *ncpp;
 	uint32_t hash;
 	int flag;
@@ -708,18 +770,19 @@ cache_enter_time(dvp, vp, cnp, tsp)
 	 * Calculate the hash key and setup as much of the new
 	 * namecache entry as possible before acquiring the lock.
 	 */
-	ncp = cache_alloc(cnp->cn_namelen);
+	ncp = cache_alloc(cnp->cn_namelen, tsp != NULL);
 	ncp->nc_vp = vp;
 	ncp->nc_dvp = dvp;
 	ncp->nc_flag = flag;
-	if (tsp != NULL)
-		ncp->nc_time = *tsp;
-	else
-		timespecclear(&ncp->nc_time);
-	ncp->nc_ticks = ticks;
+	if (tsp != NULL) {
+		n3 = (struct namecache_ts *)ncp;
+		n3->nc_time = *tsp;
+		n3->nc_ticks = ticks;
+		n3->nc_flag |= NCF_TS;
+	}
 	len = ncp->nc_nlen = cnp->cn_namelen;
 	hash = fnv_32_buf(cnp->cn_nameptr, len, FNV1_32_INIT);
-	strlcpy(ncp->nc_name, cnp->cn_nameptr, len + 1);
+	strlcpy(nc_get_name(ncp), cnp->cn_nameptr, len + 1);
 	hash = fnv_32_buf(&dvp, sizeof(dvp), hash);
 	CACHE_WLOCK();
 
@@ -732,9 +795,16 @@ cache_enter_time(dvp, vp, cnp, tsp)
 	LIST_FOREACH(n2, ncpp, nc_hash) {
 		if (n2->nc_dvp == dvp && n2->nc_nlen == cnp->cn_namelen &&
-		    !bcmp(n2->nc_name, cnp->cn_nameptr, n2->nc_nlen)) {
-			n2->nc_time = ncp->nc_time;
-			n2->nc_ticks = ncp->nc_ticks;
+		    !bcmp(nc_get_name(n2), cnp->cn_nameptr, n2->nc_nlen)) {
+			if (tsp != NULL) {
+				KASSERT((n2->nc_flag & NCF_TS) != 0,
+				    ("no NCF_TS"));
+				n3 = (struct namecache_ts *)n2;
+				n3->nc_time =
+				    ((struct namecache_ts *)ncp)->nc_time;
+				n3->nc_ticks =
+				    ((struct namecache_ts *)ncp)->nc_ticks;
+			}
 			CACHE_WUNLOCK();
 			cache_free(ncp);
 			return;
@@ -792,12 +862,12 @@ cache_enter_time(dvp, vp, cnp, tsp)
 	 */
 	if (vp) {
 		TAILQ_INSERT_HEAD(&vp->v_cache_dst, ncp, nc_dst);
-		SDT_PROBE(vfs, namecache, enter, done, dvp, ncp->nc_name, vp,
-		    0, 0);
+		SDT_PROBE(vfs, namecache, enter, done, dvp, nc_get_name(ncp),
+		    vp, 0, 0);
	} else {
 		TAILQ_INSERT_TAIL(&ncneg, ncp, nc_dst);
 		SDT_PROBE(vfs, namecache, enter_negative, done, dvp,
-		    ncp->nc_name, 0, 0, 0);
+		    nc_get_name(ncp), 0, 0, 0);
 	}
 	if (numneg * ncnegfactor > numcache) {
 		ncp = TAILQ_FIRST(&ncneg);
@@ -819,10 +889,15 @@ nchinit(void *dummy __unused)
 
 	TAILQ_INIT(&ncneg);
 
-	cache_zone_small = uma_zcreate("S VFS Cache", CACHE_ZONE_SMALL, NULL,
-	    NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_ZINIT);
-	cache_zone_large = uma_zcreate("L VFS Cache", CACHE_ZONE_LARGE, NULL,
-	    NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_ZINIT);
+	cache_zone_small = uma_zcreate("S VFS Cache",
+	    sizeof(struct namecache) + CACHE_PATH_CUTOFF + 1,
+	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_ZINIT);
+	cache_zone_small_ts = uma_zcreate("STS VFS Cache",
+	    sizeof(struct namecache_ts) + CACHE_PATH_CUTOFF + 1,
+	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_ZINIT);
+	cache_zone_large = uma_zcreate("L VFS Cache",
+	    sizeof(struct namecache_ts) + NAME_MAX + 1,
+	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_ZINIT);
 
 	nchashtbl = hashinit(desiredvnodes * 2, M_VFSCACHE, &nchash);
 }
@@ -1126,9 +1201,9 @@ vn_vptocnp_locked(struct vnode **vp, struct ucred *cred, char *buf,
 		return (error);
 	}
 	*buflen -= ncp->nc_nlen;
-	memcpy(buf + *buflen, ncp->nc_name, ncp->nc_nlen);
+	memcpy(buf + *buflen, nc_get_name(ncp), ncp->nc_nlen);
 	SDT_PROBE(vfs, namecache, fullpath, hit, ncp->nc_dvp,
-	    ncp->nc_name, vp, 0, 0);
+	    nc_get_name(ncp), vp, 0, 0);
 	dvp = *vp;
 	*vp = ncp->nc_dvp;
 	vref(*vp);
@@ -1301,7 +1376,7 @@ vn_commname(struct vnode *vp, char *buf, u_int buflen)
 		return (ENOENT);
 	}
 	l = min(ncp->nc_nlen, buflen - 1);
-	memcpy(buf, ncp->nc_name, l);
+	memcpy(buf, nc_get_name(ncp), l);
 	CACHE_RUNLOCK();
 	buf[l] = '\0';
 	return (0);
--WWZbIj0qUfl+CqYL Content-Type: application/pgp-signature Content-Disposition: inline
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (FreeBSD)

iEUEARECAAYFAk8ac4kACgkQC3+MBN1Mb4jkYgCWO9MfVJ4mkdtj82/pzDgnL6Xh
QwCgljVaycgmJVN79e0ObL9ArI6iTBs=
=mM5R
-----END PGP SIGNATURE-----
--WWZbIj0qUfl+CqYL-- From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 10:33:24 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 897AF106566C for ; Sat, 21 Jan 2012 10:33:24 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id 113C68FC0C for ; Sat, 21 Jan 2012 10:33:23 +0000 (UTC) Received: from dcave.digsys.bg (dcave.digsys.bg [192.92.129.5]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.5/8.14.5) with ESMTP id q0LAXDG5085194 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Sat, 21 Jan 2012 12:33:19 +0200 (EET) (envelope-from daniel@digsys.bg) Message-ID: <4F1A9469.1060605@digsys.bg> Date: Sat, 21 Jan 2012 12:33:13 +0200 From: Daniel Kalchev User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:9.0) Gecko/20111228 Thunderbird/9.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <4F192ADA.5020903@brockmann-consult.de> <1327069331.29444.4.camel@btw.pki2.com> <20120120153129.GA97746@icarus.home.lan> <1327077094.29408.11.camel@btw.pki2.com> <20120120181828.GA1049@icarus.home.lan> <1327117388.29408.24.camel@btw.pki2.com> In-Reply-To: <1327117388.29408.24.camel@btw.pki2.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Subject: Re: sanity check: is 9211-8i, on 8.3, with IT firmware still "the one" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 10:33:24 -0000 On 21.01.12 05:43, Dennis Glatting wrote: > I used Seatools on one of the disks from the first set > (ST1000DL002-9TT153). On a long test the tool declared there were > errors that it could not fix. I didn't see much point in trying the > second disk. You should be able to run the long tests via SMART using the smartmontools (smartctl) as well, in FreeBSD. There are a number of different tests (see man page) each doing different things -- unfortunately, they rarely do the same things on different makes/models of disk. But it is better to test the second disk to check if it has the same type of problem. > So, two separately purchased disks from the same vendor bad? > (TigerDirect) What are the odds of that? Hmm... It is not only possible, it is typical. This is why the recommendation for building a dependable array is to use disks from different vendors/lots. It may be the manufacturing lot having undetected defects.
It may be that during shipping, that group of disks experienced bad handling etc -- you may have purchased the disks at different times, yet they may come from the same manufacturing lot and shipment. Daniel From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 14:16:04 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 39F231065676 for ; Sat, 21 Jan 2012 14:16:04 +0000 (UTC) (envelope-from universite@ukr.net) Received: from otrada.od.ua (universite-1-pt.tunnel.tserv24.sto1.ipv6.he.net [IPv6:2001:470:27:140::2]) by mx1.freebsd.org (Postfix) with ESMTP id 9A21D8FC12 for ; Sat, 21 Jan 2012 14:16:03 +0000 (UTC) Received: from [IPv6:2001:470:28:140:601f:f121:15c2:c359] ([IPv6:2001:470:28:140:601f:f121:15c2:c359]) (authenticated bits=0) by otrada.od.ua (8.14.4/8.14.5) with ESMTP id q0LEFwOC065758 for ; Sat, 21 Jan 2012 16:15:58 +0200 (EET) (envelope-from universite@ukr.net) Message-ID: <4F1AC88A.2070603@ukr.net> Date: Sat, 21 Jan 2012 16:15:38 +0200 From: "Vladislav V. Prodan" User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1 MIME-Version: 1.0 To: fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (otrada.od.ua [IPv6:2001:470:28:140::5]); Sat, 21 Jan 2012 16:15:58 +0200 (EET) X-Spam-Status: No, score=-94.3 required=5.0 tests=FREEMAIL_FROM,RDNS_NONE, SPF_SOFTFAIL, TO_NO_BRKTS_DIRECT, T_TO_NO_BRKTS_FREEMAIL, USER_IN_WHITELIST autolearn=no version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mary-teresa.otrada.od.ua Cc: Subject: Unrecognized error on zfs v28 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 14:16:04 -0000 zpool scrub zroot did not help. How to deal with such errors? # zpool upgrade zroot This system is currently running ZFS pool version 28. # zpool status -v zroot pool: zroot state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. 
see: http://www.sun.com/msg/ZFS-8000-8A scan: scrub repaired 0 in 1h54m with 0 errors on Sat Jan 21 00:37:16 2012 config: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 gpt/disk-system ONLINE 0 0 0 errors: Permanent errors have been detected in the following files: /var/imap/vlad11/&BBEEOwQ+BDM-@&BB4EQgRABDAENAQw- # uname -a FreeBSD mary-teresa.XXX 8.2-STABLE FreeBSD 8.2-STABLE #0: Wed Jul 13 02:01:02 EEST 2011 root@mary-teresa.XXX:/usr/obj/usr/src/sys/XXX.4 amd64 # zfs get all zroot/var/imap NAME PROPERTY VALUE SOURCE zroot/var/imap type filesystem - zroot/var/imap creation вс май 1 2:40 2011 - zroot/var/imap used 272M - zroot/var/imap available 400G - zroot/var/imap referenced 206M - zroot/var/imap compressratio 2.23x - zroot/var/imap mounted yes - zroot/var/imap quota none default zroot/var/imap reservation none default zroot/var/imap recordsize 128K default zroot/var/imap mountpoint /var/imap inherited from zroot/var zroot/var/imap sharenfs off default zroot/var/imap checksum fletcher4 inherited from zroot zroot/var/imap compression gzip local zroot/var/imap atime on default zroot/var/imap devices on default zroot/var/imap exec off local zroot/var/imap setuid off local zroot/var/imap readonly off default zroot/var/imap jailed off default zroot/var/imap snapdir hidden default zroot/var/imap aclinherit restricted default zroot/var/imap canmount on default zroot/var/imap xattr off temporary zroot/var/imap copies 1 default zroot/var/imap version 5 - zroot/var/imap utf8only off - zroot/var/imap normalization none - zroot/var/imap casesensitivity sensitive - zroot/var/imap vscan off default zroot/var/imap nbmand off default zroot/var/imap sharesmb off default zroot/var/imap refquota none default zroot/var/imap refreservation none default zroot/var/imap primarycache all default zroot/var/imap secondarycache all default zroot/var/imap usedbysnapshots 66,2M - zroot/var/imap usedbydataset 206M - zroot/var/imap usedbychildren 0 - zroot/var/imap usedbyrefreservation 0 - zroot/var/imap logbias latency default zroot/var/imap dedup off inherited from zroot zroot/var/imap mlslabel - zroot/var/imap sync standard default zroot/var/imap refcompressratio 2.21x - -- Vladislav V. 
Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 14:20:12 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3B8391065670 for ; Sat, 21 Jan 2012 14:20:12 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 82F178FC0C for ; Sat, 21 Jan 2012 14:20:11 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id QAA14646; Sat, 21 Jan 2012 16:20:07 +0200 (EET) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1Robnb-000JSL-Em; Sat, 21 Jan 2012 16:20:07 +0200 Message-ID: <4F1AC995.7050506@FreeBSD.org> Date: Sat, 21 Jan 2012 16:20:05 +0200 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:9.0) Gecko/20111222 Thunderbird/9.0 MIME-Version: 1.0 To: Martin Ranne References: <39C592E81AEC0B418EAD826FC1BBB09B25031D@mailgate> <4F18459F.7040309@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B252444@mailgate> <4F1858FE.7020509@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B25253F@mailgate> <4F1878AC.6060704@FreeBSD.org> <39C592E81AEC0B418EAD826FC1BBB09B25284B@mailgate> In-Reply-To: <39C592E81AEC0B418EAD826FC1BBB09B25284B@mailgate> X-Enigmail-Version: undefined Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: "freebsd-fs@freebsd.org" Subject: Re: zpool import reboots computer X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 14:20:12 -0000 on 20/01/2012 11:09 Martin Ranne said the following: > I tried again to get into the debugger. It will not always work, as it freezes before I get to the prompt most of the time, but here it is. Any other commands to run in the debugger to get better information to help solve this?
> > I used the command zpool import -F -f -o readonly=on -R /mnt/serv06 zroot > > Result is the following > Fatal trap 12: page fault while in kernel mode > Fatal trap 12: page fault while in kernel mode > cpuid = 0; cpuid = 5; apic id = 00 > apic id = 05 > fault virtual address = 0x38 > fault virtual address = 0x88 > fault code = supervisor read data, page not present > fault code = supervisor read data, page not present > instruction pointer = 0x20:0xffffffff814872a1 > instruction pointer = 0x20:0xffffffff814a7ef5 > stack pointer = 0x28:0xffffff8c0d564f00 > stack pointer = 0x28:0xffffff8c0ffd7ad0 > frame pointer = 0x28:0xffffff8c0d564f30 > frame pointer = 0x28:0xffffff8c0ffd7b40 > code segment = base 0x0, limit 0xfffff, type 0x1b > code segment = base 0x0, limit 0xfffff, type 0x1b > = DPL 0, pres 1, long 1, def32 0, gran 1 > = DPL 0, pres 1, long 1, def32 0, gran 1 > processor eflags = processor eflags = interrupt enabled, interrupt enabled, resume, resume, IOPL = 0 > IOPL = 0 > current process = current process = 0 (system_task1_3) > 26[ thread pid 0 tid 100099 ] > Stopped at vdev_is_dead+0x1: cmpq $0x5,0x28(%rdi) > db> bt > Tracing pid 0 tid 100099 td 0xfffffe000e546460 > vdev_is_dead() at vdev_is_dead+0x1 > vdev_mirror_child_select() at vdev_mirror_child_select+0x67 > vdev_mirror_io_start() at vdev_mirror_io_start+0x24c > zio_vdev_io_start() at zio_vdev_io_start+0x232 > zio_execute() at zio_execute+0xc3 > zio_gang_assemble() at zio_gang_assemble+0x1b > zio_execute() at zio_execute+0xc3 > arc_read_nolock() at arc_read_nolock+0x6d1 > arc_read() at arc_read+0x93 > traverse_prefetcher() at traverse_prefetcher+0x103 > traverse_visitbp() at traverse_visitbp+0x21c > traverse_dnode() at traverse_dnode+0x7c > traverse_visitbp() at traverse_visitbp+0x3ff > traverse_visitbp() at traverse_visitbp+0x316 > traverse_visitbp() at traverse_visitbp+0x316 > traverse_visitbp() at traverse_visitbp+0x316 > traverse_visitbp() at traverse_visitbp+0x316 > traverse_visitbp() at traverse_visitbp+0x316 > traverse_visitbp() at traverse_visitbp+0x316 > traverse_dnode() at traverse_dnode+0x7c > traverse_visitbp() at traverse_visitbp+0x48c > traverse_prefetch_thread() at traverse_prefetch_thread+0x78 > taskq_run() at taskq_run+0x13 > taskqueue_run_locked() at taskqueue_run_locked+0x85 > taskqueue_thread_loop() at taskqueue_thread_loop+0x46 > fork_exit() at fork_exit+0x11f > fork_trampoline() at fork_trampoline+0xe > --- trap 0, rip = 0, rsp = 0xffffff8c0d565d00, rbp = 0 --- > db> To me it looks like in the vdev_mirror_child_select function mc->mc_vd could be NULL although the code doesn't expect it. You can add some code to the function to check if the hypothesis is correct and to skip a loop if mc->mc_vd is NULL. Such a hack is probably not needed in general, but given that your pool could be corrupted, this could be your chance to get access to it. BTW, restoring from backups is what is usually recommended first in a situation like this. 
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 15:11:42 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1665F1065672 for ; Sat, 21 Jan 2012 15:11:42 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id D53858FC1C for ; Sat, 21 Jan 2012 15:11:41 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id q0LFBesF017387; Sat, 21 Jan 2012 09:11:40 -0600 (CST) Date: Sat, 21 Jan 2012 09:11:40 -0600 (CST) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: "Vladislav V. Prodan" In-Reply-To: <4F1AC88A.2070603@ukr.net> Message-ID: References: <4F1AC88A.2070603@ukr.net> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="-559023410-107008870-1327158700=:15666" X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Sat, 21 Jan 2012 09:11:40 -0600 (CST) Cc: fs@freebsd.org Subject: Re: Unrecognized error on zfs v28 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 15:11:42 -0000 This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. ---559023410-107008870-1327158700=:15666 Content-Type: TEXT/PLAIN; charset=UTF-8; format=flowed Content-Transfer-Encoding: 8BIT On Sat, 21 Jan 2012, Vladislav V. Prodan wrote: > zpool scrub zroot did not help. > How to deal with such errors? A recommendation for how to deal with the problem was provided in the zpool status "action" text. If you don't have a backup for the file, then an alternative is to just delete it and hope that you did not really need it. Your pool/filesystem does not include any data redundancy so it is not able to repair bad data. If you had set the zfs filesystem attribute 'copies=2' then zfs would likely have been able to recover this data, even though you only have one disk, but more disk space would have been consumed. Bob > > # zpool upgrade zroot > This system is currently running ZFS pool version 28.
> see: http://www.sun.com/msg/ZFS-8000-8A > scan: scrub repaired 0 in 1h54m with 0 errors on Sat Jan 21 00:37:16 2012 > config: > > NAME STATE READ WRITE CKSUM > zroot ONLINE 0 0 0 > gpt/disk-system ONLINE 0 0 0 > > errors: Permanent errors have been detected in the following files: > > /var/imap/vlad11/&BBEEOwQ+BDM-@&BB4EQgRABDAENAQw- > > # uname -a > FreeBSD mary-teresa.XXX 8.2-STABLE FreeBSD 8.2-STABLE #0: Wed Jul 13 > 02:01:02 EEST 2011 root@mary-teresa.XXX:/usr/obj/usr/src/sys/XXX.4 > amd64 > > # zfs get all zroot/var/imap > NAME PROPERTY VALUE SOURCE > zroot/var/imap type filesystem - > zroot/var/imap creation вс май 1 2:40 2011 - > zroot/var/imap used 272M - > zroot/var/imap available 400G - > zroot/var/imap referenced 206M - > zroot/var/imap compressratio 2.23x - > zroot/var/imap mounted yes - > zroot/var/imap quota none default > zroot/var/imap reservation none default > zroot/var/imap recordsize 128K default > zroot/var/imap mountpoint /var/imap inherited > from zroot/var > zroot/var/imap sharenfs off default > zroot/var/imap checksum fletcher4 inherited > from zroot > zroot/var/imap compression gzip local > zroot/var/imap atime on default > zroot/var/imap devices on default > zroot/var/imap exec off local > zroot/var/imap setuid off local > zroot/var/imap readonly off default > zroot/var/imap jailed off default > zroot/var/imap snapdir hidden default > zroot/var/imap aclinherit restricted default > zroot/var/imap canmount on default > zroot/var/imap xattr off temporary > zroot/var/imap copies 1 default > zroot/var/imap version 5 - > zroot/var/imap utf8only off - > zroot/var/imap normalization none - > zroot/var/imap casesensitivity sensitive - > zroot/var/imap vscan off default > zroot/var/imap nbmand off default > zroot/var/imap sharesmb off default > zroot/var/imap refquota none default > zroot/var/imap refreservation none default > zroot/var/imap primarycache all default > zroot/var/imap secondarycache all default > zroot/var/imap usedbysnapshots 66,2M - > zroot/var/imap usedbydataset 206M - > zroot/var/imap usedbychildren 0 - > zroot/var/imap usedbyrefreservation 0 - > zroot/var/imap logbias latency default > zroot/var/imap dedup off inherited > from zroot > zroot/var/imap mlslabel - > zroot/var/imap sync standard default > zroot/var/imap refcompressratio 2.21x - > > -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ ---559023410-107008870-1327158700=:15666-- From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 15:45:54 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BE1041065673 for ; Sat, 21 Jan 2012 15:45:54 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id 67E618FC12 for ; Sat, 21 Jan 2012 15:45:54 +0000 (UTC) Received: from outgoing.leidinger.net (p4FC43C3D.dip.t-dialin.net [79.196.60.61]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id CD6DC844017; Sat, 21 Jan 2012 16:29:10 +0100 (CET) Received: from unknown (IO.Leidinger.net [192.168.1.12]) by outgoing.leidinger.net (Postfix) with ESMTP id 1533614DF; Sat, 21 Jan 2012 16:29:08 +0100 (CET) Date: Sat, 21 Jan 2012 16:29:06 +0100 From: Alexander Leidinger To: Willem Jan Withagen Message-ID: <20120121162906.0000518c@unknown> In-Reply-To: <4F193D90.9020703@digiware.nl> References: 
<4F193D90.9020703@digiware.nl> X-Mailer: Claws Mail 3.7.10cvs42 (GTK+ 2.16.6; i586-pc-mingw32msvc) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: CD6DC844017.A16A1 X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=-0.923, required 6, autolearn=disabled, ALL_TRUSTED -1.00, TW_ZF 0.08) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1327764551.56554@HZOyKLCGwxL7zlm1ZLFH2w X-EBL-Spam-Status: No Cc: fs@freebsd.org Subject: Re: Question about ZFS with log and cache on SSD with GPT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 15:45:54 -0000 On Fri, 20 Jan 2012 11:10:24 +0100 Willem Jan Withagen wrote: > Now my question is more about the SSD configuration. > (BTW adding 1 SSD got the insert rate up from 100/sec to > 1000/sec, > once the cache was loaded.) > > The database is on a mirror of 2 1T disks: > ada0: ATA-8 SATA 3.x device > ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) > ada0: Command Queueing enabled > > and there are 2 SSDs: > ada2: ATA-8 SATA 2.x device > ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) > ada2: Command Queueing enabled > > What I've currently done is partition all disks (also the SSDs) with > GPT like below: > batman# zpool iostat -v > capacity operations bandwidth > pool alloc free read write read write > ------------- ----- ----- ----- ----- ----- ----- > zfsboot 50.0G 49.5G 1 13 46.0K 164K > mirror 50.0G 49.5G 1 13 46.0K 164K > gpt/boot4 - - 0 5 23.0K 164K > gpt/boot6 - - 0 5 22.9K 164K > ------------- ----- ----- ----- ----- ----- ----- > zfsdata 59.4G 765G 12 62 250K 1.30M > mirror 59.4G 765G 12 62 250K 1.30M > gpt/data4 - - 5 15 127K 1.30M > gpt/data6 - - 5 15 127K 1.30M > gpt/log2 11M 1005M 0 22 12 653K > gpt/log3 11.1M 1005M 0 22 12 652K Do you have two log devices in non-mirrored mode? If yes, it would be better to have the ZIL mirrored on a pair. > cache - - - - - - > gpt/cache2 9.99G 26.3G 27 53 1.20M 5.30M > gpt/cache3 9.85G 26.4G 28 54 1.24M 5.23M > ------------- ----- ----- ----- ----- ----- ----- > > disks 4 and 6 are naming remnants of pre-AHCI times and are ada0 and > ada1. So the hard disks have the "std" ZFS setup: a boot pool and a > data pool. > > The SSDs are partitioned and assigned to zfsdata with: > gpart create -s GPT ada2 > gpart create -s GPT ada3 > gpart add -t freebsd-zfs -l log2 -s 1G ada2 > gpart add -t freebsd-zfs -l log3 -s 1G ada3 > gpart add -t freebsd-zfs -l cache2 ada2 > gpart add -t freebsd-zfs -l cache3 ada3 > zpool add zfsdata log /dev/gpt/log* > zpool add zfsdata cache /dev/gpt/cache* > > Now the question would be: are the GPT partitions correctly aligned to > give optimal performance? I would assume that the native block size of the flash is more like 4kb than 512b. As such, just creating the GPT partitions will not be the best setup. See http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/ for a description of how to align to 4k sectors. I do not know if the main devices of the pool need to be set up with an emulated 4k size (the gnop part in my description) or not, but I would assume all disks in the pool need to be set up with the temporary gnop setup.
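A minimal sketch of what a 4k-aligned variant of the SSD partitioning above could look like (same device names and labels as in the thread; the start offset is an illustrative assumption, not taken from the original mail):

# gpart create -s GPT ada2
# gpart add -t freebsd-zfs -l log2 -b 2048 -s 1G ada2
# gpart add -t freebsd-zfs -l cache2 ada2

With the log partition starting at sector 2048 (a 1 MiB boundary, so a multiple of 4k) and sized at an exact multiple of 4k, the cache partition that follows starts 4k-aligned as well; sufficiently recent versions of gpart also accept "-a 4k" on "gpart add" to request the alignment directly.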
> The hard disks are still standard 512-byte sectors, so that would be alright? > The SSDs I have my doubts about..... You could assume that the majority of cases are 4k or bigger writes (tune your MySQL this way, and do not forget to change the recordsize of the zfs dataset which contains the db files to match what the DB writes) and just align the partitions of the SSDs for 4k (do not use the gnop part in my description). I would assume that this already gives good performance in most cases. > The good thing is that v28 allows you to toy with log and cache without > losing data. So I could redo the recreation of cache and log > relatively easily. You can still lose data when a log SSD dies (if they are not mirrored). > I'd rather not redo the DB build since that takes a few days. :( > But before loading the DB, I did use some of the tuning suggestions > like using different recordsize for db-logs and innodb files. Bye, Alexander. -- http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 16:07:04 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3AB2A1065670 for ; Sat, 21 Jan 2012 16:07:04 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta03.westchester.pa.mail.comcast.net (qmta03.westchester.pa.mail.comcast.net [76.96.62.32]) by mx1.freebsd.org (Postfix) with ESMTP id C9E3E8FC08 for ; Sat, 21 Jan 2012 16:07:03 +0000 (UTC) Received: from omta20.westchester.pa.mail.comcast.net ([76.96.62.71]) by qmta03.westchester.pa.mail.comcast.net with comcast id QE3n1i0021YDfWL53FtoQd; Sat, 21 Jan 2012 15:53:48 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta20.westchester.pa.mail.comcast.net with comcast id QFtn1i00K1t3BNj3gFtofC; Sat, 21 Jan 2012 15:53:48 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 637C8102C19; Sat, 21 Jan 2012 07:53:46 -0800 (PST) Date: Sat, 21 Jan 2012 07:53:46 -0800 From: Jeremy Chadwick To: Bob Friesenhahn Message-ID: <20120121155346.GA39342@icarus.home.lan> References: <4F1AC88A.2070603@ukr.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: fs@freebsd.org Subject: Re: Unrecognized error on zfs v28 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 16:07:04 -0000 On Sat, Jan 21, 2012 at 09:11:40AM -0600, Bob Friesenhahn wrote: > On Sat, 21 Jan 2012, Vladislav V. Prodan wrote: > > >zpool scrub zroot did not help. >How to deal with such errors? > > A recommendation for how to deal with the problem was provided in > the zpool status "action" text. If you don't have a backup for the > file, then an alternative is to just delete it and hope that you did > not really need it. > > Your pool/filesystem does not include any data redundancy so it is > not able to repair bad data. > > If you had set the zfs filesystem attribute 'copies=2' then zfs > would likely have been able to recover this data, even though you > only have one disk, but more disk space would have been consumed. > ># zpool upgrade zroot > >This system is currently running ZFS pool version 28.
> > > > > ># zpool status -v zroot > > pool: zroot > >state: ONLINE > >status: One or more devices has experienced an error resulting in data > > corruption. Applications may be affected. > >action: Restore the file in question if possible. Otherwise restore the > > entire pool from backup. > > see: http://www.sun.com/msg/ZFS-8000-8A > >scan: scrub repaired 0 in 1h54m with 0 errors on Sat Jan 21 00:37:16 2012 > >config: > > > > NAME STATE READ WRITE CKSUM > > zroot ONLINE 0 0 0 > > gpt/disk-system ONLINE 0 0 0 > > > >errors: Permanent errors have been detected in the following files: > > > > /var/imap/vlad11/&BBEEOwQ+BDM-@&BB4EQgRABDAENAQw- > > > ># uname -a > >FreeBSD mary-teresa.XXX 8.2-STABLE FreeBSD 8.2-STABLE #0: Wed Jul 13 > >02:01:02 EEST 2011 root@mary-teresa.XXX:/usr/obj/usr/src/sys/XXX.4 > >amd64 > > > ># zfs get all zroot/var/imap > >NAME PROPERTY VALUE SOURCE > >zroot/var/imap type filesystem - > >zroot/var/imap creation вс май 1 2:40 2011 - > >zroot/var/imap used 272M - > >zroot/var/imap available 400G - > >zroot/var/imap referenced 206M - > >zroot/var/imap compressratio 2.23x - > >zroot/var/imap mounted yes - > >zroot/var/imap quota none default > >zroot/var/imap reservation none default > >zroot/var/imap recordsize 128K default > >zroot/var/imap mountpoint /var/imap inherited > >from zroot/var > >zroot/var/imap sharenfs off default > >zroot/var/imap checksum fletcher4 inherited > >from zroot > >zroot/var/imap compression gzip local > >zroot/var/imap atime on default > >zroot/var/imap devices on default > >zroot/var/imap exec off local > >zroot/var/imap setuid off local > >zroot/var/imap readonly off default > >zroot/var/imap jailed off default > >zroot/var/imap snapdir hidden default > >zroot/var/imap aclinherit restricted default > >zroot/var/imap canmount on default > >zroot/var/imap xattr off temporary > >zroot/var/imap copies 1 default > >zroot/var/imap version 5 - > >zroot/var/imap utf8only off - > >zroot/var/imap normalization none - > >zroot/var/imap casesensitivity sensitive - > >zroot/var/imap vscan off default > >zroot/var/imap nbmand off default > >zroot/var/imap sharesmb off default > >zroot/var/imap refquota none default > >zroot/var/imap refreservation none default > >zroot/var/imap primarycache all default > >zroot/var/imap secondarycache all default > >zroot/var/imap usedbysnapshots 66,2M - > >zroot/var/imap usedbydataset 206M - > >zroot/var/imap usedbychildren 0 - > >zroot/var/imap usedbyrefreservation 0 - > >zroot/var/imap logbias latency default > >zroot/var/imap dedup off inherited > >from zroot > >zroot/var/imap mlslabel - > >zroot/var/imap sync standard default > >zroot/var/imap refcompressratio 2.21x - Bob, one thing to note is that the R/W/CK counters on his pool as well as his (single) device are all zero. I don't know if the OP did "zpool clear" or not, but if he didn't, then I'm curious how said permanent errors were detected. On Solaris the only time I've seen the above message (specifically that a single-disk vdev experienced loss of files/data) is when there was a non-zero R/W/CK count on either the pool or the device. Sadly the OP did not provide details of what gpt/disk-system is (what hardware's attached to said GPT, etc.), but I'd still expect to see an incremented counter. Also worth noting is that the OP is using compression. That makes me wonder if there may be a bug of some kind there that results in the problem, rather than an actual device I/O error. Finally, the RELENG_8 version he's using is from July 2011.
I'm not sure if there have been any fixups to things like this since. -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 16:46:47 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BED2C106566C for ; Sat, 21 Jan 2012 16:46:47 +0000 (UTC) (envelope-from universite@ukr.net) Received: from otrada.od.ua (universite-1-pt.tunnel.tserv24.sto1.ipv6.he.net [IPv6:2001:470:27:140::2]) by mx1.freebsd.org (Postfix) with ESMTP id 3F5B18FC0C for ; Sat, 21 Jan 2012 16:46:47 +0000 (UTC) Received: from [IPv6:2001:470:28:140:601f:f121:15c2:c359] ([IPv6:2001:470:28:140:601f:f121:15c2:c359]) (authenticated bits=0) by otrada.od.ua (8.14.4/8.14.5) with ESMTP id q0LGkbr1093563; Sat, 21 Jan 2012 18:46:37 +0200 (EET) (envelope-from universite@ukr.net) Message-ID: <4F1AEBD9.5080301@ukr.net> Date: Sat, 21 Jan 2012 18:46:17 +0200 From: "Vladislav V. Prodan" User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1 MIME-Version: 1.0 To: Jeremy Chadwick References: <4F1AC88A.2070603@ukr.net> <20120121155346.GA39342@icarus.home.lan> In-Reply-To: <20120121155346.GA39342@icarus.home.lan> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (otrada.od.ua [IPv6:2001:470:28:140::5]); Sat, 21 Jan 2012 18:46:40 +0200 (EET) X-Spam-Status: No, score=-97.8 required=5.0 tests=FREEMAIL_FROM,RDNS_NONE, SPF_SOFTFAIL,USER_IN_WHITELIST autolearn=no version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mary-teresa.otrada.od.ua Cc: fs@freebsd.org Subject: Re: Unrecognized error on zfs v28 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 16:46:47 -0000 21.01.2012 17:53, Jeremy Chadwick wrote: > Sadly the OP did not provide details of what gpt/disk-system is (what > hardware's attached to said GPT, etc.), but I'd still expect to see an > incremented counter. > # smartctl -a /dev/ad4 smartctl 5.40 2010-10-16 r3189 [FreeBSD 8.2-STABLE amd64] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Western Digital Caviar Blue Serial ATA family Device Model: WDC WD5000AAKS-00TMA0 Serial Number: WD-WCAPW2776439 Firmware Version: 12.01C01 User Capacity: 500 106 780 160 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 7 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sat Jan 21 18:44:41 2012 EET SMART support is: Available - device has SMART capability. SMART support is: Enabled # gpart show ad4 => 34 976770988 ad4 GPT (465G) 34 128 1 freebsd-boot (64k) 162 8388608 2 freebsd-swap (4.0G) 8388770 968382252 3 freebsd-zfs (461G) # gpart list ad4 Geom name: ad4 modified: false state: OK fwheads: 16 fwsectors: 63 last: 976771021 first: 34 entries: 128 scheme: GPT Providers: 1. 
Name: ad4p1 Mediasize: 65536 (64k) Sectorsize: 512 Stripesize: 0 Stripeoffset: 17408 Mode: r0w0e0 rawuuid: fd6252dc-b801-11dc-bab0-001a4d5c374a rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: boot length: 65536 offset: 17408 type: freebsd-boot index: 1 end: 161 start: 34 2. Name: ad4p2 Mediasize: 4294967296 (4.0G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 82944 Mode: r1w1e1 rawuuid: 1ae064a2-b802-11dc-bab0-001a4d5c374a rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b label: swap length: 4294967296 offset: 82944 type: freebsd-swap index: 2 end: 8388769 start: 162 3. Name: ad4p3 Mediasize: 495811713024 (461G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 82944 Mode: r1w1e2 rawuuid: b2ec8a88-b806-11dc-bab0-001a4d5c374a rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: disk-system length: 495811713024 offset: 4295050240 type: freebsd-zfs index: 3 end: 976771021 start: 8388770 Consumers: 1. Name: ad4 Mediasize: 500106780160 (465G) Sectorsize: 512 Mode: r2w2e5 -- Vladislav V. Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 18:18:32 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 49BF01065672 for ; Sat, 21 Jan 2012 18:18:32 +0000 (UTC) (envelope-from wjw@digiware.nl) Received: from mail.digiware.nl (mail.ip6.digiware.nl [IPv6:2001:4cb8:1:106::2]) by mx1.freebsd.org (Postfix) with ESMTP id BC5D98FC0A for ; Sat, 21 Jan 2012 18:18:31 +0000 (UTC) Received: from rack1.digiware.nl (localhost.digiware.nl [127.0.0.1]) by mail.digiware.nl (Postfix) with ESMTP id E51C1153434; Sat, 21 Jan 2012 19:18:29 +0100 (CET) X-Virus-Scanned: amavisd-new at digiware.nl Received: from mail.digiware.nl ([127.0.0.1]) by rack1.digiware.nl (rack1.digiware.nl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id c-OI8xr3YNG3; Sat, 21 Jan 2012 19:18:28 +0100 (CET) Received: from [192.168.10.10] (vaio [192.168.10.10]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mail.digiware.nl (Postfix) with ESMTPSA id A3BA5153433; Sat, 21 Jan 2012 19:18:28 +0100 (CET) Message-ID: <4F1B0177.8080909@digiware.nl> Date: Sat, 21 Jan 2012 19:18:31 +0100 From: Willem Jan Withagen User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1 MIME-Version: 1.0 To: Alexander Leidinger References: <4F193D90.9020703@digiware.nl> <20120121162906.0000518c@unknown> In-Reply-To: <20120121162906.0000518c@unknown> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: fs@freebsd.org Subject: Re: Question about ZFS with log and cache on SSD with GPT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 18:18:32 -0000 On 21-1-2012 16:29, Alexander Leidinger wrote: >> What I've currently done is partition all disks (also the SSDs) with >> GPT like below: >> batman# zpool iostat -v >> capacity operations bandwidth >> pool alloc free read write read write >> ------------- ----- ----- ----- ----- ----- ----- >> zfsboot 50.0G 49.5G 1 13 46.0K 164K >> mirror 50.0G 49.5G 1 13 46.0K 164K >> gpt/boot4 - - 0 5 23.0K 164K >> gpt/boot6 - - 0 5 22.9K 164K >> ------------- ----- ----- ----- ----- ----- ----- >> zfsdata 59.4G 765G 12 62 250K 1.30M >> mirror 59.4G 765G 12 62 250K 1.30M >> 
gpt/data4 - - 5 15 127K 1.30M >> gpt/data6 - - 5 15 127K 1.30M >> gpt/log2 11M 1005M 0 22 12 653K >> gpt/log3 11.1M 1005M 0 22 12 652K > > Do you have two log devices in non-mirrored mode? If yes, it would be > better to have the ZIL mirrored on a pair. So what you are saying is that logging is faster in mirrored mode? Or are you more concerned about losing the LOG and thus possibly losing data? >> cache - - - - - - >> gpt/cache2 9.99G 26.3G 27 53 1.20M 5.30M >> gpt/cache3 9.85G 26.4G 28 54 1.24M 5.23M >> ------------- ----- ----- ----- ----- ----- ----- .... >> Now the question would be are the GPT partitions correctly aligned to >> give optimal performance? > > I would assume that the native block size of the flash is more like 4kb > than 512b. As such just creating the GPT partitions will not be the > best setup. Corsair reports: Max Random 4k Write (using IOMeter 08): 50k IOPS (4k aligned) So I guess that suggests 4k aligned is required. > See > http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/ > for a description of how to align to 4k sectors. I do not know if the main > devices of the pool need to be set up with an emulated 4k size (the gnop > part in my description) or not, but I would assume all disks in the > pool need to be set up with the temporary gnop setup. Well, one way of re-setting up the hard disks would be to remove them from the mirror each in turn, repartition, and then rebuild the mirror, hoping that that would work, since I need some extra space to move the partitions up. :( >> The hard disks are still standard 512-byte sectors, so that would be alright? >> The SSDs I have my doubts about..... > > You could assume that the majority of cases are 4k or bigger writes > (tune your MySQL this way, and do not forget to change the recordsize > of the zfs dataset which contains the db files to match what the DB > writes) and just align the partitions of the SSDs for 4k (do not use > the gnop part in my description). I would assume that this already > gives good performance in most cases. I'll redo the SSDs with the suggestions from your page. >> The good thing is that v28 allows you to toy with log and cache without >> losing data. So I could redo the recreation of cache and log >> relatively easily. > > You can still lose data when a log SSD dies (if they are not mirrored). I was more referring to the fact that under v28 one is able to remove log and cache through zpool commands without losing data. Just pulling the disks is of course going to corrupt data. --WjW From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 20:12:40 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4CAAE106566B for ; Sat, 21 Jan 2012 20:12:40 +0000 (UTC) (envelope-from universite@ukr.net) Received: from otrada.od.ua (universite-1-pt.tunnel.tserv24.sto1.ipv6.he.net [IPv6:2001:470:27:140::2]) by mx1.freebsd.org (Postfix) with ESMTP id 8F5B88FC12 for ; Sat, 21 Jan 2012 20:12:39 +0000 (UTC) Received: from [IPv6:2001:470:28:140:601f:f121:15c2:c359] ([IPv6:2001:470:28:140:601f:f121:15c2:c359]) (authenticated bits=0) by otrada.od.ua (8.14.4/8.14.5) with ESMTP id q0LKCV3D006986; Sat, 21 Jan 2012 22:12:31 +0200 (EET) (envelope-from universite@ukr.net) Message-ID: <4F1B1C1B.9020000@ukr.net> Date: Sat, 21 Jan 2012 22:12:11 +0200 From: "Vladislav V.
Prodan" User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1 MIME-Version: 1.0 To: Bob Friesenhahn References: <4F1AC88A.2070603@ukr.net> In-Reply-To: Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (otrada.od.ua [IPv6:2001:470:28:140::5]); Sat, 21 Jan 2012 22:12:32 +0200 (EET) X-Spam-Status: No, score=-97.8 required=5.0 tests=FREEMAIL_FROM,RDNS_NONE, SPF_SOFTFAIL,USER_IN_WHITELIST autolearn=no version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mary-teresa.otrada.od.ua Cc: fs@freebsd.org Subject: Re: Unrecognized error on zfs v28 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 20:12:40 -0000 21.01.2012 17:11, Bob Friesenhahn wrote: > On Sat, 21 Jan 2012, Vladislav V. Prodan wrote: > > A recommendation for how to deal with the problem was provided in the > zpool status "action" text. If you don't have a backup for the file, > then an alternative is to just delete it and hope that you did not > really need it. I moved the text file and started zpool scrub zroot again. Is it possible to somehow automate the removal of a large number of "problem" files? > > Your pool/filesystem does not include any data redundancy so it is not > able to repair bad data. > > If you had set the zfs filesystem attribute 'copies=2' then zfs would > likely have been able to recover this data, even though you only have > one disk, but more disk space would have been consumed. > Is it possible, using zfs set copies=2, to have the data copied again automatically, without having to move it to another location and back to the old place? -- Vladislav V.
Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 20:33:20 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1545F1065670 for ; Sat, 21 Jan 2012 20:33:20 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id 939FA8FC08 for ; Sat, 21 Jan 2012 20:33:18 +0000 (UTC) Received: from digsys226-136.pip.digsys.bg (digsys226-136.pip.digsys.bg [193.68.136.226]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.5/8.14.5) with ESMTP id q0LJvmU5087526 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Sat, 21 Jan 2012 21:57:54 +0200 (EET) (envelope-from daniel@digsys.bg) Mime-Version: 1.0 (Apple Message framework v1251.1) Content-Type: text/plain; charset=iso-8859-1 From: Daniel Kalchev In-Reply-To: <4F1B0177.8080909@digiware.nl> Date: Sat, 21 Jan 2012 21:57:52 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: References: <4F193D90.9020703@digiware.nl> <20120121162906.0000518c@unknown> <4F1B0177.8080909@digiware.nl> To: Willem Jan Withagen X-Mailer: Apple Mail (2.1251.1) Cc: Alexander Leidinger , fs@freebsd.org Subject: Re: Question about ZFS with log and cache on SSD with GPT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 20:33:20 -0000 On Jan 21, 2012, at 8:18 PM, Willem Jan Withagen wrote: > On 21-1-2012 16:29, Alexander Leidinger wrote: >>> > >> See >> http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/ >> for a description of how to align to 4k sectors. I do not know if the main >> devices of the pool need to be set up with an emulated 4k size (the gnop >> part in my description) or not, but I would assume all disks in the >> pool need to be set up with the temporary gnop setup. > > Well, one way of re-setting up the hard disks would be to remove them from > the mirror each in turn, repartition, and then rebuild the mirror, hoping > that that would work, since I need some extra space to move the > partitions up. :( With ZFS, the 'alignment' is per-vdev -- therefore you will need to recreate the mirror vdevs using gnop to make them 4k-aligned. Only one disk in a vdev needs to be 'gnop-ed' to 4k sectors, because ZFS uses the largest sector size from all devices in a vdev as the vdev 'ashift' at creation time.
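To illustrate the per-vdev ashift point, a sketch of the gnop approach for a mirror vdev, with a way to verify the result (device and pool names reused from earlier in the thread; note this recreates the pool from scratch, so it only illustrates the mechanism, it is not a procedure anyone in the thread ran):

# gnop create -S 4096 /dev/gpt/data4
# zpool create zfsdata mirror /dev/gpt/data4.nop /dev/gpt/data6
# zpool export zfsdata
# gnop destroy /dev/gpt/data4.nop
# zpool import zfsdata

Only one member carries the emulated 4k sector size; ZFS records the largest sector size it sees as the vdev's ashift at creation time, and that value persists after the .nop device is gone. Whether it took effect can be checked with something like "zdb zfsdata | grep ashift" (12 means 2^12 = 4k, 9 means 512b).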
Daniel From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 20:39:52 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 554DE106566B for ; Sat, 21 Jan 2012 20:39:52 +0000 (UTC) (envelope-from stevenschlansker@gmail.com) Received: from mail-iy0-f182.google.com (mail-iy0-f182.google.com [209.85.210.182]) by mx1.freebsd.org (Postfix) with ESMTP id 19E1B8FC18 for ; Sat, 21 Jan 2012 20:39:51 +0000 (UTC) Received: by iagz16 with SMTP id z16so4065844iag.13 for ; Sat, 21 Jan 2012 12:39:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; bh=W+1t/M1FbR9SyHpDIp9Jx8TfypQscbPwhF6N8LAsaBs=; b=hVob5DGEBLN1nqlHw/GYvGuAAvY8sEceRZspqolmxkMbrx6rwCL2VukgyR8+N/po5V Fu8+ta2haGd/sjZd3v3Lf4+ZHgjE0ol3c5CrdS2E1zt6pPy24VLGwGZqcUng8nds+yG2 3eUUYkA7CufopBxJ3YNCJLhIiZFUpZLKi5984= Received: by 10.50.156.138 with SMTP id we10mr4094483igb.10.1327177006323; Sat, 21 Jan 2012 12:16:46 -0800 (PST) Received: from [10.1.10.22] (c-76-102-48-155.hsd1.ca.comcast.net. [76.102.48.155]) by mx.google.com with ESMTPS id cv10sm6046790igc.0.2012.01.21.12.16.44 (version=TLSv1/SSLv3 cipher=OTHER); Sat, 21 Jan 2012 12:16:45 -0800 (PST) Mime-Version: 1.0 (Apple Message framework v1251.1) Content-Type: text/plain; charset=us-ascii From: Steven Schlansker In-Reply-To: <4F1B1C1B.9020000@ukr.net> Date: Sat, 21 Jan 2012 12:16:43 -0800 Content-Transfer-Encoding: quoted-printable Message-Id: <8B17CEC7-F1B2-430C-BB3D-556B9B5C8FF5@gmail.com> References: <4F1AC88A.2070603@ukr.net> <4F1B1C1B.9020000@ukr.net> To: "Vladislav V. Prodan" X-Mailer: Apple Mail (2.1251.1) Cc: fs@freebsd.org Subject: Re: Unrecognized error on zfs v28 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 20:39:52 -0000 On Jan 21, 2012, at 12:12 PM, Vladislav V. Prodan wrote: > 21.01.2012 17:11, Bob Friesenhahn wrote: >> On Sat, 21 Jan 2012, Vladislav V. Prodan wrote: >> >> A recommendation for how to deal with the problem was provided in the >> zpool status "action" text. If you don't have a backup for the file, >> then an alternative is to just delete it and hope that you did not >> really need it. > > I moved the text file and started zpool scrub zroot again. > > Is it possible to somehow automate removal of a large number of > "problem" files? You could parse the error message using e.g. sed and/or awk and pipe it to xargs rm.
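As a sketch of that idea, assuming the "Permanent errors" list format shown earlier in this thread (each damaged file printed as an indented absolute path), first inspect the extracted list, then re-run it into xargs:

# zpool status -v zroot | sed -n 's/^[[:space:]]*\(\/.*\)$/\1/p'
# zpool status -v zroot | sed -n 's/^[[:space:]]*\(\/.*\)$/\1/p' | xargs rm

The sed keeps only lines that consist of whitespace followed by an absolute path, which is how zpool status -v prints the damaged files; note that plain xargs will mishandle file names containing spaces or newlines, so check the list before adding the rm.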
From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 22:06:35 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C15F7106566B for ; Sat, 21 Jan 2012 22:06:35 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id 6A4938FC0C for ; Sat, 21 Jan 2012 22:06:35 +0000 (UTC) Received: from outgoing.leidinger.net (p4FC43C3D.dip.t-dialin.net [79.196.60.61]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id 9135C844017; Sat, 21 Jan 2012 23:06:21 +0100 (CET) Received: from unknown (IO.Leidinger.net [192.168.1.12]) by outgoing.leidinger.net (Postfix) with ESMTP id CF4C31509; Sat, 21 Jan 2012 23:06:18 +0100 (CET) Date: Sat, 21 Jan 2012 23:06:16 +0100 From: Alexander Leidinger To: Willem Jan Withagen Message-ID: <20120121230616.00006267@unknown> In-Reply-To: <4F1B0177.8080909@digiware.nl> References: <4F193D90.9020703@digiware.nl> <20120121162906.0000518c@unknown> <4F1B0177.8080909@digiware.nl> X-Mailer: Claws Mail 3.7.10cvs42 (GTK+ 2.16.6; i586-pc-mingw32msvc) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: 9135C844017.AFA81 X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=-0.923, required 6, autolearn=disabled, ALL_TRUSTED -1.00, TW_ZF 0.08) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1327788382.37017@zxd58SSEhbYufFWfL3QI6w X-EBL-Spam-Status: No Cc: fs@freebsd.org Subject: Re: Question about ZFS with log and cache on SSD with GPT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 22:06:35 -0000 On Sat, 21 Jan 2012 19:18:31 +0100 Willem Jan Withagen wrote: > On 21-1-2012 16:29, Alexander Leidinger wrote: > >> What I've currently done is partition all disks (also the SSDs) > >> with GPT like below: > >> batman# zpool iostat -v > >> capacity operations bandwidth > >> pool alloc free read write read write > >> ------------- ----- ----- ----- ----- ----- ----- > >> zfsboot 50.0G 49.5G 1 13 46.0K 164K > >> mirror 50.0G 49.5G 1 13 46.0K 164K > >> gpt/boot4 - - 0 5 23.0K 164K > >> gpt/boot6 - - 0 5 22.9K 164K > >> ------------- ----- ----- ----- ----- ----- ----- > >> zfsdata 59.4G 765G 12 62 250K 1.30M > >> mirror 59.4G 765G 12 62 250K 1.30M > >> gpt/data4 - - 5 15 127K 1.30M > >> gpt/data6 - - 5 15 127K 1.30M > >> gpt/log2 11M 1005M 0 22 12 653K > >> gpt/log3 11.1M 1005M 0 22 12 652K > > > > Do you have two log devices in non-mirrored mode? If yes, it would > > be better to have the ZIL mirrored on a pair. > > So what you are saying is that logging is faster in mirrored mode? No. > Or are you more concerned about losing the LOG and thus possibly > losing data? Yes. If one piece of the involved hardware dies, you lose data. > >> cache - - - - - - > >> gpt/cache2 9.99G 26.3G 27 53 1.20M 5.30M > >> gpt/cache3 9.85G 26.4G 28 54 1.24M 5.23M > >> ------------- ----- ----- ----- ----- ----- ----- > .... > > >> Now the question would be are the GPT partitions correctly aligned > >> to give optimal performance?
> > > > I would assume that the native block size of the flash is more like > > 4kb than 512b. As such just creating the GPT partitions will not be > > the best setup. > > Corsair reports: > Max Random 4k Write (using IOMeter 08): 50k IOPS (4k aligned) > So I guess that suggests 4k aligned is required. Sounds like it is. > > See > > http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/ > > for a description of how to align to 4k sectors. I do not know if the > > main devices of the pool need to be set up with an emulated 4k size > > (the gnop part in my description) or not, but I would assume all > > disks in the pool need to be set up with the temporary gnop setup. > > Well, one way of re-setting up the hard disks would be to remove them > from the mirror each in turn, repartition, and then rebuild the mirror, > hoping that that would work, since I need some extra space to move the > partitions up. :( Already answered by someone else, but I want to point out again that if you have the critical writes 4k aligned and they are mostly 4k or bigger in size, you could be lucky. You could compare the zpool iostat output with the gstat output of the disks. If they more or less match, you are lucky. If the gstat output is bigger, you are in the unlucky case. > >> The hard disks are still standard 512-byte sectors, so that would be > >> alright? The SSDs I have my doubts about..... > > > > You could assume that the majority of cases are 4k or bigger writes > > (tune your MySQL this way, and do not forget to change the > > recordsize of the zfs dataset which contains the db files to match > > what the DB writes) and just align the partitions of the SSDs for > > 4k (do not use the gnop part in my description). I would assume > > that this already gives good performance in most cases. > > I'll redo the SSDs with the suggestions from your page. > > >> The good thing is that v28 allows you to toy with log and cache without > >> losing data. So I could redo the recreation of cache and log > >> relatively easily. > > > > You can still lose data when a log SSD dies (if they are not > > mirrored). > > I was more referring to the fact that under v28 one is able to remove > log and cache through zpool commands without losing data. Just pulling > the disks is of course going to corrupt data. If you can recreate the data and don't care about data loss, and if you verified that two ZIL devices give more performance than one, why not. Bye, Alexander.
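A sketch of the comparison suggested above (pool name and interval are examples; gstat's -f takes a regular expression to filter devices):

# zpool iostat -v zfsdata 10
# gstat -f 'gpt/(log|cache)'

Watch both over the same window: if the write traffic gstat reports at the GEOM/device level is clearly larger than what zpool iostat reports for the log and cache devices, you are in the unlucky (misaligned) case described above.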
-- http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 22:12:28 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 61AAF106566B; Sat, 21 Jan 2012 22:12:28 +0000 (UTC) (envelope-from jhb@FreeBSD.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id 1C37B8FC19; Sat, 21 Jan 2012 22:12:28 +0000 (UTC) Received: from bigwig.baldwin.cx (bigwig.baldwin.cx [96.47.65.170]) by cyrus.watson.org (Postfix) with ESMTPSA id 945E946B0C; Sat, 21 Jan 2012 17:12:27 -0500 (EST) Received: from John-Baldwins-MacBook-Air.local (c-68-36-150-83.hsd1.nj.comcast.net [68.36.150.83]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id 01881B915; Sat, 21 Jan 2012 17:12:26 -0500 (EST) Message-ID: <4F1B384A.5070506@FreeBSD.org> Date: Sat, 21 Jan 2012 17:12:26 -0500 From: John Baldwin User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:9.0) Gecko/20111222 Thunderbird/9.0.1 MIME-Version: 1.0 To: Kostik Belousov References: <201201181707.21293.jhb@freebsd.org> <201201191026.09431.jhb@freebsd.org> <20120119160156.GF31224@deviant.kiev.zoral.com.ua> <201201191117.28128.jhb@freebsd.org> <20120121081257.GS31224@deviant.kiev.zoral.com.ua> In-Reply-To: <20120121081257.GS31224@deviant.kiev.zoral.com.ua> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (bigwig.baldwin.cx); Sat, 21 Jan 2012 17:12:27 -0500 (EST) Cc: Rick Macklem , fs@freebsd.org, Peter Wemm Subject: Re: Race in NFS lookup can result in stale namecache entries X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 22:12:28 -0000 On 1/21/12 3:12 AM, Kostik Belousov wrote: > On Thu, Jan 19, 2012 at 11:17:28AM -0500, John Baldwin wrote: >> On Thursday, January 19, 2012 11:01:56 am Kostik Belousov wrote: >>> On Thu, Jan 19, 2012 at 10:26:09AM -0500, John Baldwin wrote: >>>> On Thursday, January 19, 2012 9:06:13 am Kostik Belousov wrote: >>>>> On Wed, Jan 18, 2012 at 05:07:21PM -0500, John Baldwin wrote: >>>>> ... >>>>>> What I concluded is that it would really be far simpler and more >>>>>> obvious if the cached timestamps were stored in the namecache entry >>>>>> directly rather than having multiple name cache entries validated by >>>>>> shared state in the nfsnode. This does mean allowing the name cache >>>>>> to hold some filesystem-specific state. However, I felt this was much >>>>>> cleaner than adding a lot more complexity to nfs_lookup(). Also, this >>>>>> turns out to be fairly non-invasive to implement since nfs_lookup() >>>>>> calls cache_lookup() directly, but other filesystems only call it >>>>>> indirectly via vfs_cache_lookup(). I considered letting filesystems >>>>>> store a void * cookie in the name cache entry and having them provide >>>>>> a destructor, etc. However, that would require extra allocations for >>>>>> NFS lookups. Instead, I just adjusted the name cache API to >>>>>> explicitly allow the filesystem to store a single timestamp in a name >>>>>> cache entry by adding a new 'cache_enter_time()' that accepts a struct >>>>>> timespec that is copied into the entry. 
'cache_enter_time()' also >>>>>> saves the current value of 'ticks' in the entry. 'cache_lookup()' is >>>>>> modified to add two new arguments used to return the timespec and >>>>>> ticks value used for a namecache entry when a hit in the cache occurs. >>>>>> >>>>>> One wrinkle with this is that the name cache does not create actual >>>>>> entries for ".", and thus it would not store any timestamps for those >>>>>> lookups. To fix this I changed the NFS client to explicitly fast-path >>>>>> lookups of "." by always returning the current directory as set up by >>>>>> cache_lookup() and never bothering to do a LOOKUP or check for stale >>>>>> attributes in that case. >>>>>> >>>>>> The current patch against 8 is at >>>>>> http://www.FreeBSD.org/~jhb/patches/nfs_lookup.patch >>>>> ... >>>>> >>>>> So now you add 8*2+4 bytes to each namecache entry on amd64 unconditionally. >>>>> Current size of the struct namecache invariant part on amd64 is 72 bytes, >>>>> so the addition of 20 bytes looks slightly excessive. I am not sure about >>>>> the typical distribution of namecache nc_name lengths, so it is not obvious >>>>> whether the change increases the memory usage significantly. >>>>> >>>>> A flag could be added to nc_flags to indicate the presence of a timestamp. >>>>> The timestamps would be conditionally placed after nc_nlen; we could >>>>> probably use a union to ease the access. Then, the direct dereferences of >>>>> nc_name would need to be converted to some inline function. >>>>> >>>>> I can do this after your patch is committed, if you consider the memory >>>>> usage saving worth it. >>>> >>>> Hmm, if the memory usage really is worrying then I could move to using the >>>> void * cookie method instead. >>> >>> I think the current approach is better than a cookie that again will be >>> used only for NFS. With the cookie, you still have 8 bytes for each ncp. >>> With the union, you do not have the overhead for !NFS. >>> >>> The default setup allows for ~300000 vnodes on a not too powerful amd64 machine; >>> the ncsizefactor 2 together with 8 bytes for a cookie is 4.5MB. For 20 bytes >>> per ncp, we get 12MB overhead. >> >> Ok. If you want to tackle the union bits I'm happy to let you do so. That >> will at least break up the changes a bit. > > Below is my take. The first version of the patch added both small and large > zones with ts, but later I decided that large does not make sense. > If wanted, it can be restored easily. This looks good to me. I think you are fine with always using the _ts structure for the large case.
-- John Baldwin From owner-freebsd-fs@FreeBSD.ORG Sat Jan 21 23:02:24 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1DEBB106566B for ; Sat, 21 Jan 2012 23:02:24 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-vw0-f54.google.com (mail-vw0-f54.google.com [209.85.212.54]) by mx1.freebsd.org (Postfix) with ESMTP id C2B408FC15 for ; Sat, 21 Jan 2012 23:02:23 +0000 (UTC) Received: by vbbey12 with SMTP id ey12so1795212vbb.13 for ; Sat, 21 Jan 2012 15:02:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=X60VeD+V0uZz+vcwLP0LrCJRTckxkzeFsH7g/HGylYA=; b=x6mHwoEQSPjQ66QSsMCJtRcTNvvhli/p6+eGsc99f6QOFUGohSucBLInw9c5FXz8nf CTVZOhQsIhB3GcSp/xBfAqTERb6SLgVV6lJcX03ObjOkJRp/juBIyzc5wlMj7NoMyUhs 3QJvrW0HUDhnL8TevzV8PId2D5kmxfMoZM7Ic= MIME-Version: 1.0 Received: by 10.52.24.70 with SMTP id s6mr1379260vdf.32.1327185383431; Sat, 21 Jan 2012 14:36:23 -0800 (PST) Received: by 10.220.117.11 with HTTP; Sat, 21 Jan 2012 14:36:23 -0800 (PST) In-Reply-To: <20120121230616.00006267@unknown> References: <4F193D90.9020703@digiware.nl> <20120121162906.0000518c@unknown> <4F1B0177.8080909@digiware.nl> <20120121230616.00006267@unknown> Date: Sat, 21 Jan 2012 14:36:23 -0800 Message-ID: From: Freddie Cash To: Alexander Leidinger Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: fs@freebsd.org Subject: Re: Question about ZFS with log and cache on SSD with GPT X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 21 Jan 2012 23:02:24 -0000 On Jan 21, 2012 2:06 PM, "Alexander Leidinger" wrote: > > On Sat, 21 Jan 2012 19:18:31 +0100 Willem Jan Withagen > wrote: > > Or are you more concerned about losing the LOG and thus possibly > > losing data? > > Yes. If one piece of the involved hardware dies, you lose data. To clarify this a bit: you will only lose data if: - data is written to the ZIL device - the entire system crashes before the data in the ZIL is written to disk - the ZIL device is not available at pool import time If you write data to the ZIL, then the system crashes *but all data in the ZIL is already written to the pool*, and the ZIL device is not available at pool import time, then no data is lost, and the pool import will continue without the separate ZIL. ZFS pools prior to ZFSv19 could not cope with a missing ZIL device at pool import time, so those pools were effectively lost. The only way to recover data was through some finicky zdb stuff. Thus, you had to use a mirrored log device to mitigate this risk. Since ZFSv19, though, a pool can be imported with a faulted ZIL device, and carry on. Only data that exists solely in the ZIL and was never written to the pool is lost.
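Tying this back to the log-device subthread: a sketch of turning the two independent log devices into a single mirrored log vdev on a v28 pool (device names from Willem's setup; log-device removal requires pool version 19 or later):

# zpool remove zfsdata gpt/log2 gpt/log3
# zpool add zfsdata log mirror gpt/log2 gpt/log3

After this, a dying SSD only degrades the log mirror instead of taking any unflushed ZIL data with it.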