From owner-freebsd-fs@FreeBSD.ORG Mon Feb 11 03:16:23 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 3AD41FF; Mon, 11 Feb 2013 03:16:23 +0000 (UTC) (envelope-from cross+freebsd@distal.com) Received: from mail.distal.com (mail.distal.com [IPv6:2001:470:e24c:200::ae25]) by mx1.freebsd.org (Postfix) with ESMTP id ED40D2B8; Mon, 11 Feb 2013 03:16:22 +0000 (UTC) Received: from magrathea.distal.com (magrathea.distal.com [206.138.151.12]) (authenticated bits=0) by mail.distal.com (8.14.3/8.14.3) with ESMTP id r1B3GEsr005577 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Sun, 10 Feb 2013 22:16:16 -0500 (EST) Content-Type: text/plain; charset=iso-8859-1 Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\)) Subject: Re: Changes to kern.geom.debugflags? From: Chris Ross In-Reply-To: <315EDE17-4995-4819-BC82-E9B7D942E82A@distal.com> Date: Sun, 10 Feb 2013 22:16:14 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: <51CB677E-83FF-43EF-A3CC-CF4ADBDB0C7B@distal.com> References: <7AA0B5D0-D49C-4D5A-8FA0-AA57C091C040@distal.com> <6A0C1005-F328-4C4C-BB83-CA463BD85127@distal.com> <20121225232507.GA47735@alchemy.franken.de> <8D01A854-97D9-4F1F-906A-7AB59BF8850B@distal.com> <6FC4189B-85FA-466F-AA00-C660E9C16367@distal.com> <20121230032403.GA29164@pix.net> <56B28B8A-2284-421D-A666-A21F995C7640@distal.com> <20130104234616.GA37999@alchemy.franken.de> <50F82846.6030104@FreeBSD.org> <315EDE17-4995-4819-BC82-E9B7D942E82A@distal.com> To: Andriy Gapon X-Mailer: Apple Mail (2.1499) X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.2 (mail.distal.com [206.138.151.250]); Sun, 10 Feb 2013 22:16:17 -0500 (EST) Cc: "freebsd-fs@freebsd.org" , Kurt Lidl , "freebsd-sparc64@freebsd.org" , Marius Strobl X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 11 Feb 2013 03:16:23 -0000 On Jan 17, 2013, at 19:49 , Chris Ross wrote: > On Jan 17, 2013, at 11:35 , Andriy Gapon wrote: >> Chris, >> >> thank you for triaging and analyzing this problem. And sorry for the long delay >> (caused by the New Year craziness you mentioned earlier). >> >> The problem is that arch_zfs_probe methods are expected only to probe for ZFS >> disks/partitions, but they are not allowed to execute any other ZFS operations. >> I assumed this to be true and forgot to check sparc64_zfs_probe. Mea culpa. >> >> Could you please test the following patch? > > Thank you, Andriy. Much as you'd expect, that patch solves the problem. I get some > of the printf()s that I'd put into zfs_fmtdev(), and the system loads successfully. > > Please commit that patch, and if you could, change the comment just below the last > portion of it, which is now not quite accurate (since you moved the code it mentions). > > Thanks again! How long will this take to get to stable/9? Being new to FreeBSD, > I'm not too familiar with the process of HEAD/stable/etc. (In NetBSD, it would be a > commit followed by a pull request.) Sad to say, after hand-testing that patch I waited for it to appear on stable-9 (confirmed by manual inspection of the relevant code), and tried again. 
This time, I get a slightly different failure: Rebooting with command: boot Boot device: disk1 File and args: >> FreeBSD/sparc64 ZFS boot block Boot path: /pci@1c,600000/scsi@2/disk@1,0:a ERROR: Last Trap: Memory Address not Aligned {1} ok This is with a zfsloader built from stable-9 as of Feb 2. I'm updating and rebuilding now, just to check, but I wanted to send out a note in case anyone else on the sparc64 list has also seen this. - Chris From owner-freebsd-fs@FreeBSD.ORG Mon Feb 11 11:06:43 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id DE5C227A for ; Mon, 11 Feb 2013 11:06:43 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id D05961BC3 for ; Mon, 11 Feb 2013 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r1BB6hPN081248 for ; Mon, 11 Feb 2013 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r1BB6hUG081246 for freebsd-fs@FreeBSD.org; Mon, 11 Feb 2013 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 11 Feb 2013 11:06:43 GMT Message-Id: <201302111106.r1BB6hUG081246@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 11 Feb 2013 11:06:43 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/174060 fs [ext2fs] Ext2FS system crashes (buffer overflow?) 
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. 
o kern/165950 fs [ffs] SU+J and fsck problem o kern/165923 fs [nfs] Writing to NFS-backed mmapped files fails if flu o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/162362 fs [snapshots] [panic] ufs with snapshot(s) panics when g o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o bin/161807 fs [patch] add option for explicitly specifying metadata o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic o kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. 
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o 
kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot 
normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 296 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Feb 11 14:21:12 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id E134F1EC for ; Mon, 11 Feb 2013 14:21:12 +0000 (UTC) (envelope-from tomek.cedro@gmail.com) Received: from mail-qa0-f45.google.com (mail-qa0-f45.google.com [209.85.216.45]) by mx1.freebsd.org (Postfix) with ESMTP id A5748306 for ; Mon, 11 Feb 2013 14:21:12 +0000 (UTC) Received: by mail-qa0-f45.google.com with SMTP id g10so1186921qah.11 for ; Mon, 11 Feb 2013 06:21:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:sender:date:x-google-sender-auth:message-id :subject:from:to:content-type; bh=Glm8sY/mDhKpZTWUCiEYdhZNXQ6elmGkuzfK/TTmWxM=; b=sCaoiZZqaaizqtCdzWJ2TnQUoOAbNYslNftd56EDop70ri/SoibRxNZv0guaHPVFup yhfa1WNa7DLRLmrMhyBZEdaYg2gFVx5JfPdbWzZWU4sX8znhySrHWYzyQLpi2H32fUYn 3jKndtlWkigS1Pt8J6XKKAC23cWXkLekKQczmI5oIFxjyCQrdfp2Yv8SiSzg9jcZTmIz m0nVnM+EjjZK1u08BH/RCivvqHZesGT9l8+ht9mLoCJ5fM+flhZfjsQZH/8FwEZ24uM1 57gJKy0nbipIDUNKbebBYIzUoqkZiiEehnslO0pFS5hB7QXS47CPKgiLQnQadyZC7Eab AfLA== MIME-Version: 1.0 X-Received: by 10.224.186.81 with SMTP id cr17mr5465382qab.99.1360592466241; Mon, 11 Feb 2013 06:21:06 -0800 (PST) Sender: tomek.cedro@gmail.com Received: by 10.49.71.204 with HTTP; Mon, 11 Feb 2013 06:21:06 -0800 (PST) Date: Mon, 11 Feb 2013 15:21:06 +0100 X-Google-Sender-Auth: OqUeRuGKmZVn8R89ZB-qK9snOpw Message-ID: Subject: how much reliable is UFS2+SU/J From: CeDeROM To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 11 Feb 2013 14:21:12 -0000 Hello :-) Some time ago I switched to UFS2+SU/J. However, after a crash I found some issues on a /home partition that SU/Journal seems to have missed. This caused applications to misbehave or fall back to default configuration. Running "fsck" showed that the filesystem is clean, but running "fsck -fy" found some issues. This happened at least three times in a short period, so I started to wonder how reliable UFS2+SU/J really is. Should I expect some fixes in this area, or will things stay like this forever for UFS2+SU/J? 
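For reference, the forced check mentioned above is just fsck run with -f against the unmounted filesystem; a minimal sketch (run it from single-user mode if /home cannot be unmounted cleanly):

  umount /home
  fsck -f -y /home    # -f forces a full pass even though the journal marks the filesystem clean
  mount /home
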
Best regards :-) Tomek -- CeDeROM, SQ7MHZ, http://www.tomek.cedro.info From owner-freebsd-fs@FreeBSD.ORG Mon Feb 11 14:49:31 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 55220150; Mon, 11 Feb 2013 14:49:31 +0000 (UTC) (envelope-from borjam@sarenet.es) Received: from proxypop04.sare.net (proxypop04.sare.net [194.30.0.65]) by mx1.freebsd.org (Postfix) with ESMTP id 137436D4; Mon, 11 Feb 2013 14:49:30 +0000 (UTC) Received: from [172.16.2.2] (izaro.sarenet.es [192.148.167.11]) by proxypop04.sare.net (Postfix) with ESMTPSA id 864849DD49C; Mon, 11 Feb 2013 15:49:22 +0100 (CET) From: Borja Marcos Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Subject: Devilator 1.1 including ZFS stats Date: Mon, 11 Feb 2013 15:49:21 +0100 Message-Id: To: freebsd-performance@freebsd.org, FreeBSD Filesystems Mime-Version: 1.0 (Apple Message framework v1085) X-Mailer: Apple Mail (2.1085) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 11 Feb 2013 14:49:31 -0000 Hello, Sorry for the crossposting, but I think this is also relevant to -fs. After many years gathering dust (although I've been using it internally) I have updated devilator, the performance data collector for Orca. Apart from some cleanup and some bug fixes, I am including ZFS monitoring. It's a bit crude now, but I will be happy to enhance it with suggestions for interesting metrics. Orca (http://www.orcaware.com/orca/) is a well-known package used mostly on Solaris systems to graph system performance data, thanks to a data collector called "orcallator". Devilator is the name of the data collector for FreeBSD. Thanks to Jose M. Alcaide for suggesting the name ;) An example of the kind of data Orca+Devilator can graph is available at the following link: http://devilator.frobula.com/ And the devilator source code can be grabbed from: http://devilator.frobula.com/devilator-1.1.tar.gz Please send praise, rotten tomatoes or suggestions to: borjam@gmail.com Enjoy! 
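Getting started is just a matter of fetching and unpacking the tarball; the directory name below assumes the archive extracts to devilator-1.1 (check the bundled documentation for the actual installation steps):

  fetch http://devilator.frobula.com/devilator-1.1.tar.gz
  tar xzf devilator-1.1.tar.gz
  cd devilator-1.1
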
From owner-freebsd-fs@FreeBSD.ORG Mon Feb 11 16:24:38 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 87491468 for ; Mon, 11 Feb 2013 16:24:38 +0000 (UTC) (envelope-from takeda@takeda.tk) Received: from chinatsu.takeda.tk (mail.takeda.tk [74.0.89.210]) by mx1.freebsd.org (Postfix) with ESMTP id 22ED0CE1 for ; Mon, 11 Feb 2013 16:24:37 +0000 (UTC) Received: from [10.186.227.129] (238.sub-70-197-64.myvzw.com [70.197.64.238]) (authenticated bits=0) by chinatsu.takeda.tk (8.14.5/8.14.5) with ESMTP id r1BGEwFU041417 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NO); Mon, 11 Feb 2013 08:14:59 -0800 (PST) (envelope-from takeda@takeda.tk) User-Agent: K-9 Mail for Android In-Reply-To: References: MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Subject: Re: how much reliable is UFS2+SU/J From: Derek Kulinski Date: Mon, 11 Feb 2013 08:14:49 -0800 To: CeDeROM , freebsd-fs@freebsd.org Message-ID: <0affce66-f00b-4f16-9a57-4a3d71eedd99@email.android.com> X-Virus-Scanned: clamav-milter 0.97.6 at chinatsu.takeda.tk X-Virus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 11 Feb 2013 16:24:38 -0000 CeDeROM wrote: >Hello :-) > >Some time ago I switched to UFS2+SU/J. However, after a crash I found >some issues on a /home partition that SU/Journal seems to have >missed. This caused applications to misbehave or fall back to default >configuration. Running "fsck" showed that the filesystem is clean, but >running "fsck -fy" found some issues. This happened at least three >times in a short period, so I started to wonder how reliable >UFS2+SU/J really is. Should I expect some fixes in this area, or will >things stay like this forever for UFS2+SU/J? When I first tried it myself, I found corrupted data even after a clean shutdown, so I turned off journaling. It was an SSD, so fsck is very fast. Later I learned that SU/J is a bad idea on an SSD. Not sure if that is why I got a corrupted disk each time, or because SU/J writes to the disk more. -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. From owner-freebsd-fs@FreeBSD.ORG Mon Feb 11 16:41:53 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id D82AA9D9; Mon, 11 Feb 2013 16:41:53 +0000 (UTC) (envelope-from cross@distal.com) Received: from mail.distal.com (mail.distal.com [IPv6:2001:470:e24c:200::ae25]) by mx1.freebsd.org (Postfix) with ESMTP id A3388DF3; Mon, 11 Feb 2013 16:41:53 +0000 (UTC) Received: from zalamar.mm-corp.net (static-66-16-13-46.dsl.cavtel.net [66.16.13.46]) (authenticated bits=0) by mail.distal.com (8.14.3/8.14.3) with ESMTP id r1BGffEP007379 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Mon, 11 Feb 2013 11:41:47 -0500 (EST) Subject: Re: Changes to kern.geom.debugflags? 
Mime-Version: 1.0 (Apple Message framework v1283) Content-Type: text/plain; charset=us-ascii From: Chris Ross In-Reply-To: <51CB677E-83FF-43EF-A3CC-CF4ADBDB0C7B@distal.com> Date: Mon, 11 Feb 2013 11:41:41 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: <7D91DCEC-38CD-45C9-BD21-C99F26A52197@distal.com> References: <7AA0B5D0-D49C-4D5A-8FA0-AA57C091C040@distal.com> <6A0C1005-F328-4C4C-BB83-CA463BD85127@distal.com> <20121225232507.GA47735@alchemy.franken.de> <8D01A854-97D9-4F1F-906A-7AB59BF8850B@distal.com> <6FC4189B-85FA-466F-AA00-C660E9C16367@distal.com> <20121230032403.GA29164@pix.net> <56B28B8A-2284-421D-A666-A21F995C7640@distal.com> <20130104234616.GA37999@alchemy.franken.de> <50F82846.6030104@FreeBSD.org> <315EDE17-4995-4819-BC82-E9B7D942E82A@distal.com> <51CB677E-83FF-43EF-A3CC-CF4ADBDB0C7B@distal.com> To: Andriy Gapon X-Mailer: Apple Mail (2.1283) X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.2 (mail.distal.com [206.138.151.250]); Mon, 11 Feb 2013 11:41:50 -0500 (EST) Cc: "freebsd-fs@freebsd.org" , Marius Strobl , Kurt Lidl , "freebsd-sparc64@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 11 Feb 2013 16:41:53 -0000 On Feb 10, 2013, at 10:16 PM, Chris Ross wrote: >> Thanks again! How long will this take to get to stable/9? Being new to FreeBSD, >> I'm not too familiar with the process of HEAD/stable/etc. (In NetBSD, it would be a >> commit followed by a pull request.) > > Sad to say, after hand-testing that patch I waited for it to appear on stable-9 > (confirmed by manual inspection of the relevant code), and tried again. This time, I get a > slightly different failure: > > Rebooting with command: boot > Boot device: disk1 File and args: > >>> FreeBSD/sparc64 ZFS boot block > Boot path: /pci@1c,600000/scsi@2/disk@1,0:a > ERROR: Last Trap: Memory Address not Aligned > > {1} ok > > This is with a zfsloader built from stable-9 as of Feb 2. I'm updating and rebuilding > now, just to check, but I wanted to send out a note in case anyone else on the > sparc64 list has also seen this. I'm pleased to say that after rebuilding stable/9 as of last night and installing it, it now boots successfully. So whether it was my human error or something that's been fixed in the last two weeks, it appears to be a non-problem. Apologies for the noise. 
- Chris From owner-freebsd-fs@FreeBSD.ORG Tue Feb 12 00:19:42 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 4264BCFD for ; Tue, 12 Feb 2013 00:19:42 +0000 (UTC) (envelope-from list_freebsd@bluerosetech.com) Received: from rush.bluerosetech.com (rush.bluerosetech.com [IPv6:2607:fc50:1000:9b00::25]) by mx1.freebsd.org (Postfix) with ESMTP id 1F7B58B1 for ; Tue, 12 Feb 2013 00:19:42 +0000 (UTC) Received: from vivi.cat.pdx.edu (vivi.cat.pdx.edu [131.252.214.6]) by rush.bluerosetech.com (Postfix) with ESMTPSA id EB0621141D for ; Mon, 11 Feb 2013 16:19:40 -0800 (PST) Received: from [127.0.0.1] (c-76-27-220-79.hsd1.wa.comcast.net [76.27.220.79]) by vivi.cat.pdx.edu (Postfix) with ESMTPSA id E986A24D76 for ; Mon, 11 Feb 2013 16:19:39 -0800 (PST) Message-ID: <51198A9C.4070406@bluerosetech.com> Date: Mon, 11 Feb 2013 16:19:40 -0800 From: Darren Pilgrim User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.10) Gecko/20121024 Thunderbird/10.0.10 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: When did ZFS support snapshotting during a scrub? Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 12 Feb 2013 00:19:42 -0000 The other day I discovered that my 8.3-R systems (zpool v28, ZFS v4) will let me create and destroy snapshots when a scrub is running. This was not the case in $previous_version, but a quick scan of release notes doesn't mention it. I believe you couldn't snapshot during a scrub in 8.1-R, but I'm not sure. Does anyone know which ZFS/zpool version removed this restriction? From owner-freebsd-fs@FreeBSD.ORG Tue Feb 12 00:24:48 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 49E66DB6 for ; Tue, 12 Feb 2013 00:24:48 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-qa0-f53.google.com (mail-qa0-f53.google.com [209.85.216.53]) by mx1.freebsd.org (Postfix) with ESMTP id D8E558DA for ; Tue, 12 Feb 2013 00:24:47 +0000 (UTC) Received: by mail-qa0-f53.google.com with SMTP id z4so1421812qan.12 for ; Mon, 11 Feb 2013 16:24:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=MuO6Q1nWz49160M2F3VvxFQaWR9agu3tM89+3p2tn/s=; b=RYYApgRFLLTcCu4FZJ4xE+I2IlZHyknnlWMvXzkpf5sSppSbkn8LRFGX1hc11V1ItO uJXMgWHEcGj7WkxtVw6R/0vN34NfIClx4v4pA+Y8G4QLQg1Uto3tWfn6GslFdEGWpT/K GsF4s+VsfaFk6ShuDVsF9NCV603TnO3LnPXF+5cSI138JRirj2bJozL+UPaIUOR5k0lp FbTE8bP89pTIdHDLmuw7wu+k+9wwwN5KrhcuERBVK0glJR7PcIpk4m0cCn+0DwKA1OB5 3XnvFkFlW/Twb+DokuqI5cAexK/1wbCd6bQd4MvhW1xciiG3MuAk3U/Xg+8YdKBrVcHs Fazg== MIME-Version: 1.0 X-Received: by 10.224.182.70 with SMTP id cb6mr6176973qab.80.1360628681322; Mon, 11 Feb 2013 16:24:41 -0800 (PST) Received: by 10.49.106.233 with HTTP; Mon, 11 Feb 2013 16:24:41 -0800 (PST) Received: by 10.49.106.233 with HTTP; Mon, 11 Feb 2013 16:24:41 -0800 (PST) In-Reply-To: <51198A9C.4070406@bluerosetech.com> References: <51198A9C.4070406@bluerosetech.com> Date: Mon, 11 Feb 2013 16:24:41 -0800 Message-ID: Subject: Re: When did ZFS support snapshotting during a scrub? 
From: Freddie Cash To: Darren Pilgrim Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 12 Feb 2013 00:24:48 -0000 That was only an issue with ZFSv6 from 7.x. ZFSv19 and above allow snapshots during scrubs. On 2013-02-11 4:19 PM, "Darren Pilgrim" wrote: > The other day I discovered that my 8.3-R systems (zpool v28, ZFS v4) will > let me create and destroy snapshots when a scrub is running. This was not > the case in $previous_version, but a quick scan of release notes doesn't > mention it. I believe you couldn't snapshot during a scrub in 8.1-R, but > I'm not sure. Does anyone know which ZFS/zpool version removed this > restriction? > ______________________________**_________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/**mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@**freebsd.org > " > From owner-freebsd-fs@FreeBSD.ORG Tue Feb 12 00:29:15 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id AFAF8146 for ; Tue, 12 Feb 2013 00:29:15 +0000 (UTC) (envelope-from delphij@gmail.com) Received: from mail-qa0-f43.google.com (mail-qa0-f43.google.com [209.85.216.43]) by mx1.freebsd.org (Postfix) with ESMTP id 77F86925 for ; Tue, 12 Feb 2013 00:29:15 +0000 (UTC) Received: by mail-qa0-f43.google.com with SMTP id dx4so1431366qab.9 for ; Mon, 11 Feb 2013 16:29:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=uMjKgXYfJ+56wZX3cZc2KQcmi+lGC7yNEi87FqQBmYI=; b=kfcf1mdupSFVHQQH9z/9c8ChLf1H5IGlbe9b1RHAuuqET95BAirtA+RwP0EgCdg2yD CurwiH7QfKoJz9fFf/dQYA5I/69nkvgKa1Dwf5K6M+oUp/YHPiENgODUbFomHjcLptxu i7BqpNUUG2dAvul3gdF5cMEgCZMyXvyTQGCylvtxT6uKTjEwCHRZYu+VqluPXkU6PxPs MSrkgGw1P13sgu/+Mffe/aRcn6jgxTsEHKKAo3OfQyeR9ki27BvB2tY6VEWlp9L34Pyk B06CDaF2HfO/+Gft4AnDgwr9fCWIj56PIpcglg2iaIaKj9z7NAvDJCnDRCp0jld/YK0H G6Tw== MIME-Version: 1.0 X-Received: by 10.224.9.77 with SMTP id k13mr6521321qak.4.1360628954534; Mon, 11 Feb 2013 16:29:14 -0800 (PST) Received: by 10.49.12.162 with HTTP; Mon, 11 Feb 2013 16:29:14 -0800 (PST) In-Reply-To: <51198A9C.4070406@bluerosetech.com> References: <51198A9C.4070406@bluerosetech.com> Date: Mon, 11 Feb 2013 16:29:14 -0800 Message-ID: Subject: Re: When did ZFS support snapshotting during a scrub? From: Xin LI To: Darren Pilgrim Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 12 Feb 2013 00:29:15 -0000 It was onnv-gate 7046:361307ae060d, where ZFS was bumped to pool version 11. On Mon, Feb 11, 2013 at 4:19 PM, Darren Pilgrim wrote: > The other day I discovered that my 8.3-R systems (zpool v28, ZFS v4) will > let me create and destroy snapshots when a scrub is running. This was not > the case in $previous_version, but a quick scan of release notes doesn't > mention it. I believe you couldn't snapshot during a scrub in 8.1-R, but > I'm not sure. 
Does anyone know which ZFS/zpool version removed this > restriction? > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" -- Xin LI https://www.delphij.net/ FreeBSD - The Power to Serve! Live free or die From owner-freebsd-fs@FreeBSD.ORG Tue Feb 12 00:38:54 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id DBB74236 for ; Tue, 12 Feb 2013 00:38:54 +0000 (UTC) (envelope-from list_freebsd@bluerosetech.com) Received: from yoshi.bluerosetech.com (yoshi.bluerosetech.com [IPv6:2607:f2f8:a450::66]) by mx1.freebsd.org (Postfix) with ESMTP id BAB3596F for ; Tue, 12 Feb 2013 00:38:54 +0000 (UTC) Received: from vivi.cat.pdx.edu (vivi.cat.pdx.edu [131.252.214.6]) by yoshi.bluerosetech.com (Postfix) with ESMTPSA id 62FF3E603B; Mon, 11 Feb 2013 16:38:54 -0800 (PST) Received: from [127.0.0.1] (c-76-27-220-79.hsd1.wa.comcast.net [76.27.220.79]) by vivi.cat.pdx.edu (Postfix) with ESMTPSA id B7C7124D76; Mon, 11 Feb 2013 16:38:53 -0800 (PST) Message-ID: <51198F1E.9030008@bluerosetech.com> Date: Mon, 11 Feb 2013 16:38:54 -0800 From: Darren Pilgrim User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.10) Gecko/20121024 Thunderbird/10.0.10 MIME-Version: 1.0 To: Xin LI Subject: Re: When did ZFS support snapshotting during a scrub? References: <51198A9C.4070406@bluerosetech.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 12 Feb 2013 00:38:54 -0000 On 2013-02-11 16:29, Xin LI wrote: > It was onnv-gate 7046:361307ae060d, where ZFS was bumped to pool version 11. Did FreeBSD releases 8.0 and 7.3, which bumped the pool version to 13, include this fix? From owner-freebsd-fs@FreeBSD.ORG Tue Feb 12 19:49:12 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 6E73A2CA for ; Tue, 12 Feb 2013 19:49:12 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: from mail-ee0-f44.google.com (mail-ee0-f44.google.com [74.125.83.44]) by mx1.freebsd.org (Postfix) with ESMTP id 05525DBF for ; Tue, 12 Feb 2013 19:49:11 +0000 (UTC) Received: by mail-ee0-f44.google.com with SMTP id l10so240714eei.3 for ; Tue, 12 Feb 2013 11:49:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:sender:date:from:to:subject:message-id:mime-version :content-type:content-disposition:user-agent; bh=5xqn3z+fLrvwSqyHRPYZn+9Q3DltS2BljivUIk+dNG4=; b=CrG7e2CBi2OvLRnJyMJCUiMFRzWqAhBhQ4m48RrldEFrv9i4uULNmLL2gcQbp+zowv tXuLND/zbgP1mHI5zzeaaHtx9vh2dcy8iuKOkzjrNaVR3dBfCH4BDTS7xORBHkFMU7al wJ+TDvPutVGEjE3zTrGQYA9l0kdlnYY1R/v3cOUTvHhs0rfoTbg1SbYzx5ezOQoa4yuL NiqasNmhxJbtXC7NIDJNLh5XObMS/dygy6UQryg3T3O2Pu0wGidMsSilM/v7hTSNXEcw Ms4NU7RSeX3SKhCHwVsLwHfiajjdR+fkBBZluhAgwrYXjUdzKWkGWAUEfNCOx3OwsoN/ Ndfg== X-Received: by 10.14.173.69 with SMTP id u45mr66019955eel.21.1360698051008; Tue, 12 Feb 2013 11:40:51 -0800 (PST) Received: from ithaqua.etoilebsd.net (ithaqua.etoilebsd.net. 
[37.59.37.188]) by mx.google.com with ESMTPS id q5sm69518744eeo.17.2013.02.12.11.40.49 (version=TLSv1 cipher=RC4-SHA bits=128/128); Tue, 12 Feb 2013 11:40:49 -0800 (PST) Sender: Baptiste Daroussin Date: Tue, 12 Feb 2013 20:40:47 +0100 From: Baptiste Daroussin To: fs@FreeBSD.org Subject: Marking some FS as jailable Message-ID: <20130212194047.GE12760@ithaqua.etoilebsd.net> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="tMbDGjvJuJijemkf" Content-Disposition: inline User-Agent: Mutt/1.5.21 (2010-09-15) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 12 Feb 2013 19:49:12 -0000 --tMbDGjvJuJijemkf Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Hi, I would like to mark some filesystems as jailable; here are the ones I need: linprocfs, tmpfs and fdescfs. I was planning to do it by adding an allow.mount.${fs} flag for each one. Does anyone have an objection? regards, Bapt --tMbDGjvJuJijemkf Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlEamr8ACgkQ8kTtMUmk6EwWfQCcCJrW4IokW5LuQt4sst6hKqi3 tA0An2n2zILlMkUI21Tj4RAJ7Zyc2NMQ =DqiR -----END PGP SIGNATURE----- --tMbDGjvJuJijemkf-- From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 05:06:33 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id C67CB821; Wed, 13 Feb 2013 05:06:33 +0000 (UTC) (envelope-from jamie@FreeBSD.org) Received: from m2.gritton.org (gritton.org [199.192.164.235]) by mx1.freebsd.org (Postfix) with ESMTP id 8CEE0AA2; Wed, 13 Feb 2013 05:06:32 +0000 (UTC) Received: from glorfindel.gritton.org (c-174-52-130-157.hsd1.ut.comcast.net [174.52.130.157]) (authenticated bits=0) by m2.gritton.org (8.14.5/8.14.5) with ESMTP id r1D56Vgf070145; Tue, 12 Feb 2013 22:06:31 -0700 (MST) (envelope-from jamie@FreeBSD.org) Message-ID: <511B1F55.3080500@FreeBSD.org> Date: Tue, 12 Feb 2013 22:06:29 -0700 From: Jamie Gritton User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.24) Gecko/20120129 Thunderbird/3.1.16 MIME-Version: 1.0 To: Baptiste Daroussin Subject: Re: Marking some FS as jailable References: <20130212194047.GE12760@ithaqua.etoilebsd.net> In-Reply-To: <20130212194047.GE12760@ithaqua.etoilebsd.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 05:06:33 -0000 On 02/12/13 12:40, Baptiste Daroussin wrote: > Hi, > > I would like to mark some filesystems as jailable; here are the ones I need: > linprocfs, tmpfs and fdescfs. I was planning to do it by adding an > allow.mount.${fs} flag for each one. > > Does anyone have an objection? > > regards, > Bapt Would it make sense for linprocfs to use the existing allow.mount.procfs flag? 
- Jamie From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 09:03:25 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id D123DF5B; Wed, 13 Feb 2013 09:03:25 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: from mail-wi0-f171.google.com (mail-wi0-f171.google.com [209.85.212.171]) by mx1.freebsd.org (Postfix) with ESMTP id 2E7B03C4; Wed, 13 Feb 2013 09:03:24 +0000 (UTC) Received: by mail-wi0-f171.google.com with SMTP id hn17so5397520wib.4 for ; Wed, 13 Feb 2013 01:03:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:sender:date:from:to:cc:subject:message-id:references :mime-version:content-type:content-disposition:in-reply-to :user-agent; bh=c1PZB+5icM4J5djo0iZmj/jDtSxpsk3iM2Or0obIS2w=; b=VSnf3tS+ebuwCpwVGrIsFzySl1a/WS5KmWbkQCy2MZ3rKv06a9TvwMI3imsTox5uuZ IzFUrlGx1Aed/mEgkix0SNOCMU27HjoIl1DkzNyE/mxHnLYh59OgGXWttlZDIczigThi /T9g6xJKMLfeemdQgBzmLtEJmL0bZHzsKYkm7fWAD234dThGqNlULW3vEXjJfpg7Cd0y I4z1TYgUvBLJ4iVfWX47H6tR54AHSEgz7/z+V99Wh9DbjUOfuk3rt6TNz+oQ6Nu0n114 IK0k4i1gIUXV5hEeD7IFO9Cex6hXmBToIhpSiL4zJTOyDlo8UX3ZHxcEGUCL/3w2AII0 6J0Q== X-Received: by 10.194.87.100 with SMTP id w4mr1936882wjz.48.1360746204075; Wed, 13 Feb 2013 01:03:24 -0800 (PST) Received: from ithaqua.etoilebsd.net (ithaqua.etoilebsd.net. [37.59.37.188]) by mx.google.com with ESMTPS id s8sm41252143wif.9.2013.02.13.01.03.22 (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 13 Feb 2013 01:03:22 -0800 (PST) Sender: Baptiste Daroussin Date: Wed, 13 Feb 2013 10:03:20 +0100 From: Baptiste Daroussin To: Jamie Gritton Subject: Re: Marking some FS as jailable Message-ID: <20130213090320.GC44004@ithaqua.etoilebsd.net> References: <20130212194047.GE12760@ithaqua.etoilebsd.net> <511B1F55.3080500@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="tqI+Z3u+9OQ7kwn0" Content-Disposition: inline In-Reply-To: <511B1F55.3080500@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 09:03:25 -0000 --tqI+Z3u+9OQ7kwn0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Tue, Feb 12, 2013 at 10:06:29PM -0700, Jamie Gritton wrote: > On 02/12/13 12:40, Baptiste Daroussin wrote: > > Hi, > > > > I would like to mark some filesystems as jailable; here are the ones I need: > > linprocfs, tmpfs and fdescfs. I was planning to do it by adding an > > allow.mount.${fs} flag for each one. > > > > Does anyone have an objection? > > > > regards, > > Bapt > > Would it make sense for linprocfs to use the existing allow.mount.procfs > flag? OK, I'll have a look at how it is done. I will also use the same approach for linsysfs, as the two are quite close. 
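As a sketch of how this would look from the administrator's side, here is a hypothetical jail.conf(5) entry; only allow.mount and allow.mount.procfs exist today, the tmpfs and fdescfs lines use the proposed allow.mount.${fs} names, and the jail name and paths are made up:

  sandbox {
      path = "/jails/sandbox";
      host.hostname = "sandbox.example.org";
      persist;
      enforce_statfs = 1;       # in-jail mounts also require enforce_statfs < 2
      allow.mount;              # master switch for mounting inside the jail
      allow.mount.procfs;       # existing flag; linprocfs may reuse it as suggested above
      allow.mount.tmpfs;        # proposed
      allow.mount.fdescfs;      # proposed
  }
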
regards, Bapt --tqI+Z3u+9OQ7kwn0 Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlEbVtgACgkQ8kTtMUmk6EwzzACgr+Jhc7WX6iX+Qb3Mz/KgqsUM cKoAn2wPZ3qy6FPb9G/rgaPOWdEd8wqo =C7o2 -----END PGP SIGNATURE----- --tqI+Z3u+9OQ7kwn0-- From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 16:16:26 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 962E0852; Wed, 13 Feb 2013 16:16:26 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 5C34DD9E; Wed, 13 Feb 2013 16:16:26 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r1DGGQAJ088935; Wed, 13 Feb 2013 16:16:26 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r1DGGQSQ088931; Wed, 13 Feb 2013 16:16:26 GMT (envelope-from linimon) Date: Wed, 13 Feb 2013 16:16:26 GMT Message-Id: <201302131616.r1DGGQSQ088931@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/175950: [zfs] Possible deadlock in zfs after long uptime X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 16:16:26 -0000 Old Synopsis: Possible deadlock in zfs after long uptime New Synopsis: [zfs] Possible deadlock in zfs after long uptime Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Feb 13 16:16:11 UTC 2013 Responsible-Changed-Why: reclassify. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=175950 From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 16:18:55 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 5F717ADA; Wed, 13 Feb 2013 16:18:55 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 37B4ADF3; Wed, 13 Feb 2013 16:18:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r1DGItb4089077; Wed, 13 Feb 2013 16:18:55 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r1DGItnN089073; Wed, 13 Feb 2013 16:18:55 GMT (envelope-from linimon) Date: Wed, 13 Feb 2013 16:18:55 GMT Message-Id: <201302131618.r1DGItnN089073@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/175897: [zfs] operations on readonly zpool hang X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 16:18:55 -0000 Old Synopsis: operations on readonly zpool hang New Synopsis: [zfs] operations on readonly zpool hang Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Feb 13 16:18:44 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=175897 From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 16:37:02 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 78162153 for ; Wed, 13 Feb 2013 16:37:02 +0000 (UTC) (envelope-from tjg@ucsc.edu) Received: from mail-ia0-x233.google.com (ia-in-x0233.1e100.net [IPv6:2607:f8b0:4001:c02::233]) by mx1.freebsd.org (Postfix) with ESMTP id 39E1BEBD for ; Wed, 13 Feb 2013 16:37:02 +0000 (UTC) Received: by mail-ia0-f179.google.com with SMTP id x24so1381775iak.10 for ; Wed, 13 Feb 2013 08:37:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ucsc.edu; s=ucsc-google; h=mime-version:x-received:date:message-id:subject:from:to :content-type; bh=QI4psaYZAHq8i1X/7EjIttcQ7Nnsm+/X08YbhYP+V0M=; b=DsAAn7d3bwhRyqS/JUAeowFrp8zmwjFZEOxIW3RJ1CRdl/NWrKbJImtaCUYyucN6Sf 2EN6ny36n0aa/mPyvzxioxnvjhPyFZbXM1ORc1FtxJcn3lwc2eaauMhtJ1NLrk7G0DVf vKeDL2B+pKPgJzUsAm7qFl7Nd+k32p/QPmKWw= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-received:date:message-id:subject:from:to :content-type:x-gm-message-state; bh=QI4psaYZAHq8i1X/7EjIttcQ7Nnsm+/X08YbhYP+V0M=; b=Kqhwe0cgAvXzRUiJOceYxJasbmNlLHFDYtkXeCKvdafHY0KKV645RQe9+CTM2C8ahb bLzKj/vP4F7sfoLwqmvg8KMBGe2f/BHzULws0mswREbcbOnUL5wpl5/64vf6I996Rd2f k+DsDsMa/jA7yBkSzqx9usdEGdEGKl7xJtuCT4Xii+K1J6NaIok/CCsnr8JH9htQ/Ndz rhUKbkiv3PE492fTnD47Ej0tb9pit0IbhImm3ppj/ZAhskm2v0m/4UTqUSTJl5y31gyV 8bbfw36xc9lE9rBEGBvq9T79lATK6+3fVFOuDy6jLPivwTPxLZwZNMQEWHVpBDeShwEl /S+w== MIME-Version: 1.0 X-Received: by 10.50.45.197 with SMTP id p5mr12138490igm.41.1360773421394; Wed, 13 Feb 2013 08:37:01 -0800 (PST) Received: by 10.42.18.71 with HTTP; Wed, 13 Feb 2013 08:37:01 -0800 (PST) Date: Wed, 13 Feb 2013 
08:37:01 -0800 Message-ID: Subject: FreeBSD 9.1 ZFS-Related Crash From: Tim Gustafson To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Gm-Message-State: ALoCoQkBVrncyEiekPr4+xI7FOh+E6alZtYyHDxoUIoyrql7Hh7pdWHX0zhNue+SOigk0N3sFXuk X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 16:37:02 -0000 Hi, I'm not sure how to report this as an official bug, or if it even qualifies as one. We're running ZFS on a FreeBSD 9.1 machine. Our zpool is version 28 and our zfs file systems are version 5. Some of our users still have individual ZFS file systems for their home directories. One in particular also has a bunch of cron jobs that run every few minutes on different systems that access his home directory via NFSv3. In order to take his home directory off-line, I removed him from /etc/exports and re-started mountd. When I attempted to change the mount point of his ZFS file system, the server crashed. When the server came back up and I attempted to change the mount point a second time, the server crashed again. I've done this hundreds of times for other users, and have had maybe 5 or 6 other crashes on other people's home directories as well. So, it's a somewhat reproducible bug, but it's not very consistent. My best guess is that if you attempt to un-mount a ZFS file system when there is still some sort of NFS activity, it causes the system to crash. But, most of the time it seems that if you attempt to unmount a file system that is still in use, the unmount command returns an error stating that the file system is still in use. So there's some certain type of NFS activity that seems to cause the issue - whether it's perhaps trying to open a file right as the file system is being unmounted, or maybe having a file locked but not open...I just don't know which condition is the culprit. I have some spare hardware that I could use to attempt to reproduce this crash, but I wanted to ping the group first to see if anyone else has seen this issue. Is this a known issue? Has anyone else seen this sort of behavior? 
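For concreteness, the sequence described above amounts to roughly the following; the dataset name tank/home/user and the new mountpoint are hypothetical placeholders, not taken from the report:

# remove the user's line from /etc/exports, then restart mountd
service mountd restart

# the step that coincides with the crash: changing the dataset's mountpoint
zfs set mountpoint=/home/offline/user tank/home/user
# (a plain "zfs unmount tank/home/user" exercises the same unmount path,
#  since changing the mountpoint unmounts and remounts the dataset)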
-- Tim Gustafson tjg@ucsc.edu 831-459-5354 Baskin Engineering, Room 313A From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 16:54:48 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 2530C852 for ; Wed, 13 Feb 2013 16:54:48 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from cpsmtpb-ews09.kpnxchange.com (cpsmtpb-ews09.kpnxchange.com [213.75.39.14]) by mx1.freebsd.org (Postfix) with ESMTP id 8B7721000 for ; Wed, 13 Feb 2013 16:54:46 +0000 (UTC) Received: from cpsps-ews13.kpnxchange.com ([10.94.84.180]) by cpsmtpb-ews09.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 13 Feb 2013 17:53:21 +0100 Received: from CPSMTPM-TLF102.kpnxchange.com ([195.121.3.5]) by cpsps-ews13.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 13 Feb 2013 17:53:22 +0100 Received: from sjakie.klop.ws ([212.182.167.131]) by CPSMTPM-TLF102.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 13 Feb 2013 17:54:39 +0100 Received: from 212-182-167-131.ip.telfort.nl (localhost [127.0.0.1]) by sjakie.klop.ws (Postfix) with ESMTP id A6A6E46C8 for ; Wed, 13 Feb 2013 17:54:39 +0100 (CET) Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: FreeBSD 9.1 ZFS-Related Crash References: Date: Wed, 13 Feb 2013 17:54:38 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Ronald Klop" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.14 (FreeBSD) X-OriginalArrivalTime: 13 Feb 2013 16:54:39.0682 (UTC) FILETIME=[D00D8220:01CE0A0A] X-RcptDomain: freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 16:54:48 -0000 On Wed, 13 Feb 2013 17:37:01 +0100, Tim Gustafson wrote: > Hi, > > I'm not sure how to report this as an official bug, or if it even > qualifies as one. > > We're running ZFS on a FreeBSD 9.1 machine. Our zpool is version 28 > and our zfs file systems are version 5. > > Some of our users still have individual ZFS file systems for their > home directories. One in particular also has a bunch of cron jobs > that run every few minutes on different systems that access his home > directory via NFSv3. In order to take his home directory off-line, I > removed him from /etc/exports and re-started mountd. When I attempted > to change the mount point of his ZFS file system, the server crashed. > When the server came back up and I attempted to change the mount point > a second time, the server crashed again. > > I've done this hundreds of times for other users, and have had maybe 5 > or 6 other crashes on other people's home directories as well. So, > it's a somewhat reproducible bug, but it's not very consistent. > > My best guess is that if you attempt to un-mount a ZFS file system > when there is still some sort of NFS activity, it causes the system to > crash. But, most of the time it seems that if you attempt to unmount > a file system that is still in use, the unmount command returns an > error stating that the file system is still in use. So there's some > certain type of NFS activity that seems to cause the issue - whether > it's perhaps trying to open a file right as the file system is being > unmounted, or maybe having a file locked but not open...I just don't > know which condition is the culprit. 
> > I have some spare hardware that I could use to attempt to reproduce > this crash, but I wanted to ping the group first to see if anyone else > has seen this issue. Is this a known issue? Has anyone else seen > this sort of behavior? > What do you mean by crash? Does it hang, does it reboot, does it panic? If it panics, did you get a dump of the kernel after the panic? Ronald. From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 16:57:43 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 6BB6CAB5 for ; Wed, 13 Feb 2013 16:57:43 +0000 (UTC) (envelope-from tjg@ucsc.edu) Received: from mail-ia0-x22a.google.com (mail-ia0-x22a.google.com [IPv6:2607:f8b0:4001:c02::22a]) by mx1.freebsd.org (Postfix) with ESMTP id 3A90E16F for ; Wed, 13 Feb 2013 16:57:43 +0000 (UTC) Received: by mail-ia0-f170.google.com with SMTP id k20so1416181iak.29 for ; Wed, 13 Feb 2013 08:57:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ucsc.edu; s=ucsc-google; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=LumK7YlbOf7OWqnYWxRRb9DyJHHYQeObQQJ7LBu3nZI=; b=R1wh1ZVx0M8aSZDAqdlp8ToI+Neu63cX6ZRcEpnaRiq5J0EDxHINU+uUAQf3cdp+iX HcsW0XWr6mMarGRl3haqRnzQWAv80RXaZkPU8pxZaE63mPDixLO+nnytAA6Wk3b3sGLa mlacefw78y799/Q+aR00ToTCTBqCGJG7i57k8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type:x-gm-message-state; bh=LumK7YlbOf7OWqnYWxRRb9DyJHHYQeObQQJ7LBu3nZI=; b=KAgNShbdt3dTca+4XJKR1tIgkKbEK8Pg3fXPYAwcZrkNPwEPuj/vqCTpfd6/IPm+TJ NEj3eptWtOhk2pt1Er0UjocRYyTsLT6WH8LPZ86thpfui9iKYml7MGMT00un8yT31uu4 kROVje8FXVynRJOs5Qlns7YcomC5eJ9OBB1d2m9h+qzu3cx2fI1cetFmq5LGiav3Vyxq gelhu7+ZJFwmDztc8qCrpyh/Wr+ndx1KLs6yLBDN8ylEmFMjjIOqhuBSjibEwLJ6Ce2m q2eMSoPQVxuNrv9EzkjsMkvzkJdN8yLXevOdGU56YG1IaRPIiXA/ScuMTzfLzAnt7/cL jjbQ== MIME-Version: 1.0 X-Received: by 10.50.45.197 with SMTP id p5mr12281775igm.41.1360774662503; Wed, 13 Feb 2013 08:57:42 -0800 (PST) Received: by 10.42.18.71 with HTTP; Wed, 13 Feb 2013 08:57:42 -0800 (PST) In-Reply-To: References: Date: Wed, 13 Feb 2013 08:57:42 -0800 Message-ID: Subject: Re: FreeBSD 9.1 ZFS-Related Crash From: Tim Gustafson To: Ronald Klop Content-Type: text/plain; charset=ISO-8859-1 X-Gm-Message-State: ALoCoQkC/WZSlR9INIWacyyrefunSDdTG3UDMrm0z7ESvBwYPDyhv7dcYkM4b6HF9qGKDKua3Xor Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 16:57:43 -0000 > What do you mean by crash? Does it hang, does it reboot, does it panic? > If it panics, did you get a dump of the kernel after the panic? It reboots. One the console, I get some bright white text that goes away too fast to read, and then the system reboots. It's definitely an unclean shutdown because the boot partition needed to fsck when it came back up. 
-- Tim Gustafson tjg@ucsc.edu 831-459-5354 Baskin Engineering, Room 313A From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 18:07:53 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 03F32B09 for ; Wed, 13 Feb 2013 18:07:53 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from cpsmtpb-ews10.kpnxchange.com (cpsmtpb-ews10.kpnxchange.com [213.75.39.15]) by mx1.freebsd.org (Postfix) with ESMTP id 90875692 for ; Wed, 13 Feb 2013 18:07:52 +0000 (UTC) Received: from cpsps-ews29.kpnxchange.com ([10.94.84.195]) by cpsmtpb-ews10.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 13 Feb 2013 19:06:26 +0100 Received: from CPSMTPM-TLF103.kpnxchange.com ([195.121.3.6]) by cpsps-ews29.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 13 Feb 2013 19:06:27 +0100 Received: from sjakie.klop.ws ([212.182.167.131]) by CPSMTPM-TLF103.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); Wed, 13 Feb 2013 19:07:44 +0100 Received: from 212-182-167-131.ip.telfort.nl (localhost [127.0.0.1]) by sjakie.klop.ws (Postfix) with ESMTP id 9E75A4784 for ; Wed, 13 Feb 2013 19:07:44 +0100 (CET) Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: FreeBSD 9.1 ZFS-Related Crash References: Date: Wed, 13 Feb 2013 19:07:44 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Ronald Klop" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.14 (FreeBSD) X-OriginalArrivalTime: 13 Feb 2013 18:07:44.0984 (UTC) FILETIME=[05E57D80:01CE0A15] X-RcptDomain: freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 18:07:53 -0000 On Wed, 13 Feb 2013 17:57:42 +0100, Tim Gustafson wrote: >> What do you mean by crash? Does it hang, does it reboot, does it panic? >> If it panics, did you get a dump of the kernel after the panic? > > It reboots. One the console, I get some bright white text that goes > away too fast to read, and then the system reboots. It's definitely > an unclean shutdown because the boot partition needed to fsck when it > came back up. > Do you have something like serial console so you can capture the output? Ronald. 
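If the machine (or its IPKVM) does expose a serial port, getting panic output onto it is mostly a matter of loader settings; a minimal sketch, assuming the first serial port at the default 9600 bps:

# /boot/loader.conf
boot_multicons="YES"              # keep the video console as well
boot_serial="YES"
console="comconsole,vidconsole"
comconsole_speed="9600"

A terminal program (or the IPKVM's virtual serial device, if it has one) attached to that port will then capture the panic message and backtrace that scroll past too quickly on the screen.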
From owner-freebsd-fs@FreeBSD.ORG Wed Feb 13 19:09:26 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id BDF2FC18 for ; Wed, 13 Feb 2013 19:09:26 +0000 (UTC) (envelope-from tjg@ucsc.edu) Received: from mail-ie0-x236.google.com (mail-ie0-x236.google.com [IPv6:2607:f8b0:4001:c03::236]) by mx1.freebsd.org (Postfix) with ESMTP id 8E0089E5 for ; Wed, 13 Feb 2013 19:09:26 +0000 (UTC) Received: by mail-ie0-f182.google.com with SMTP id k14so2163731iea.13 for ; Wed, 13 Feb 2013 11:09:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ucsc.edu; s=ucsc-google; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=1TwIU63ttY+j23TTh2z/ZArkbMJZSZSmrem+ZQ8H59E=; b=NjJRdq/S2ylWQRcTVmkYWaIOb73Sc+8SL3wS5xmdFVGgk8pLU7Bs78afyhwx8R/NNj KdO4wjJnPMq1h5qWwwWIZdxQS8vNuwLPYmnQlHLqI+zbtaGc/mxi38ScUUK8V2YWKfRM JNFUczAfDkbHO8DMwrkVb0gXGAi4xEZ9nRkKc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type:x-gm-message-state; bh=1TwIU63ttY+j23TTh2z/ZArkbMJZSZSmrem+ZQ8H59E=; b=LG5KNlTwKlJNMWqT6A34hH1jJXO4RgGwvYqcA+XtgdVaQvT710C8Oc57n2ie7ugF5h LJY4vdJnFqARUxmKcG4R9aRLI++xLaEUCnZIZfhlSVq1VvYL8emxd0l0LDqPCpuqMw+O UDbOiP5hr0HhkNQVjkDYsc40MOgodMXmgziu35Z0Lal5zsKhDe3wvjS2iJIkGuGYqROA 6/ZbdUGm3oWR5yM0R1SyeyPN5TeXqKKyecxxk8B6vKDjvSHMggnYzCWjBTj9FBcV6krN zmVPiVDMyW6o1z+sfEEcynVBAVNV9BEutcLd9zNE2hV+9m1dxx6VmcGrohS4/oihfJHh 8thw== MIME-Version: 1.0 X-Received: by 10.50.46.197 with SMTP id x5mr13198578igm.7.1360782565995; Wed, 13 Feb 2013 11:09:25 -0800 (PST) Received: by 10.42.18.71 with HTTP; Wed, 13 Feb 2013 11:09:25 -0800 (PST) In-Reply-To: References: Date: Wed, 13 Feb 2013 11:09:25 -0800 Message-ID: Subject: Re: FreeBSD 9.1 ZFS-Related Crash From: Tim Gustafson To: Ronald Klop Content-Type: text/plain; charset=ISO-8859-1 X-Gm-Message-State: ALoCoQkePx7eBh1eSEkQSUvoG0iVBpE9xL+yAa129ZibpgR1Ibrc6MVTTQzqqVXl1UXU7qFDqhux Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 13 Feb 2013 19:09:26 -0000 > Do you have something like serial console so you can capture the output? We do not, unfortunately. I'm not even sure this box has any serial ports. The box does have an IPKVM management interface that we use regularly, but I don't think that has any recording capability. I turned on crash dumps by adding this to my /etc/rc.conf: dumpdev="auto" Will crash dumps be helpful here? 
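With dumpdev set, the usual post-mortem flow looks roughly like this; savecore(8) runs from rc on the next boot, and the paths below are its defaults:

# /etc/rc.conf
dumpdev="auto"            # already set; swap must be large enough to hold the dump
dumpdir="/var/crash"      # where savecore(8) writes the dump files (the default)

# after the next panic and reboot:
ls /var/crash                                   # expect info.N and vmcore.N
kgdb /boot/kernel/kernel /var/crash/vmcore.0    # ideally a kernel with debug symbols
# then "bt" at the (kgdb) prompt gives the backtrace of the panicking thread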
-- Tim Gustafson tjg@ucsc.edu 831-459-5354 Baskin Engineering, Room 313A From owner-freebsd-fs@FreeBSD.ORG Thu Feb 14 11:16:01 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D158EC49; Thu, 14 Feb 2013 11:16:01 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id A8CE4994; Thu, 14 Feb 2013 11:16:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r1EBG1tU007904; Thu, 14 Feb 2013 11:16:01 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r1EBG1nw007900; Thu, 14 Feb 2013 11:16:01 GMT (envelope-from linimon) Date: Thu, 14 Feb 2013 11:16:01 GMT Message-Id: <201302141116.r1EBG1nw007900@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/176141: [zfs] sharesmb=on makes errors for sharenfs, and still sets the option X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 14 Feb 2013 11:16:01 -0000 Synopsis: [zfs] sharesmb=on makes errors for sharenfs, and still sets the option Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Feb 14 11:15:44 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=176141 From owner-freebsd-fs@FreeBSD.ORG Thu Feb 14 13:27:20 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9460DE62; Thu, 14 Feb 2013 13:27:20 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: from mail-wi0-f181.google.com (mail-wi0-f181.google.com [209.85.212.181]) by mx1.freebsd.org (Postfix) with ESMTP id EEE84217; Thu, 14 Feb 2013 13:27:19 +0000 (UTC) Received: by mail-wi0-f181.google.com with SMTP id hm6so2721479wib.14 for ; Thu, 14 Feb 2013 05:27:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:sender:date:from:to:cc:subject:message-id:references :mime-version:content-type:content-disposition:in-reply-to :user-agent; bh=lh8GNdyAY5hLHvwBlq9gycue7GHnQDw7Un3DepTVn6U=; b=EJrwQidvC2Qf7hOovkJC1/lRqaH7009LrgyiFoHBQA0GycKKfQrsvXdrzJ/qgD+3sq Y7qAqOtYuEuMul0hzRA/8FcEltjrSe5hSA9qVTw/ww01W14/IFOHrpEBkWE/ptfPb8Ok XJVIHo33LIhJW8eaJMgoMpiZKpRJYSPW/kSwddo0v+CY9KzzwTSVZSHnQlJ+fcH29tG8 MoLJKVVgxZb6Y55c5mTXuPVNiKBQkYaw8eKFZ6M4oJyfEFVQDFMNdUMNZ8C1N/hoaEDK QWNO/A1SiS71idYFE82ZApjkjA+fl+1sDhIAR7mvfarFCnqgJW1HRiROLmlFHIzoStqr /lkA== X-Received: by 10.194.156.196 with SMTP id wg4mr45459827wjb.22.1360848438812; Thu, 14 Feb 2013 05:27:18 -0800 (PST) Received: from ithaqua.etoilebsd.net (ithaqua.etoilebsd.net. 
[37.59.37.188]) by mx.google.com with ESMTPS id ex1sm52215851wib.7.2013.02.14.05.27.16 (version=TLSv1 cipher=RC4-SHA bits=128/128); Thu, 14 Feb 2013 05:27:17 -0800 (PST) Sender: Baptiste Daroussin Date: Thu, 14 Feb 2013 14:27:15 +0100 From: Baptiste Daroussin To: Jamie Gritton Subject: Re: Marking some FS as jailable Message-ID: <20130214132715.GG44004@ithaqua.etoilebsd.net> References: <20130212194047.GE12760@ithaqua.etoilebsd.net> <511B1F55.3080500@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="/Zw+/jwnNHcBRYYu" Content-Disposition: inline In-Reply-To: <511B1F55.3080500@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 14 Feb 2013 13:27:20 -0000 --/Zw+/jwnNHcBRYYu Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Tue, Feb 12, 2013 at 10:06:29PM -0700, Jamie Gritton wrote: > On 02/12/13 12:40, Baptiste Daroussin wrote: > > Hi, > > > > I would like to mark some filesystem as jailable, here is the one I nee= d: > > linprocfs, tmpfs and fdescfs, I was planning to do it with adding a > > allow.mount.${fs} for each one. > > > > Anyone has an objection? > > > > regards, > > Bapt >=20 > Would it make sense for linprocfs to use the existing allow.mount.procfs > flag? Here is a patch that uses allow.mount.procfs for linsysfs and linprocfs. It also addd a new allow.mount.tmpfs to allow tmpfs. It seems to work here, can anyone confirm this is the right way to do it? I'll commit in 2 parts: first lin*fs, second tmpfs related things http://people.freebsd.org/~bapt/jail-fs.diff regards, Bapt --/Zw+/jwnNHcBRYYu Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlEc5jIACgkQ8kTtMUmk6EyC2ACfWk8tYvAnJyD4XG9+4lHrCvRr LMoAnR4PQwxYOAknOa8tL368YlftWXaf =RkRX -----END PGP SIGNATURE----- --/Zw+/jwnNHcBRYYu-- From owner-freebsd-fs@FreeBSD.ORG Thu Feb 14 13:45:08 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E3B38530 for ; Thu, 14 Feb 2013 13:45:08 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.78]) by mx1.freebsd.org (Postfix) with ESMTP id A34252F0 for ; Thu, 14 Feb 2013 13:45:08 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.69) (envelope-from ) id 1U5z7U-0000dn-5e for freebsd-fs@freebsd.org; Thu, 14 Feb 2013 14:45:00 +0100 Received: from a83-161-216-224.adsl.xs4all.nl ([83.161.216.224] helo=ronaldradial) by smtp.greenhost.nl with esmtpsa (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1U5z7U-00020x-6u for freebsd-fs@freebsd.org; Thu, 14 Feb 2013 14:45:00 +0100 Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: FreeBSD 9.1 ZFS-Related Crash References: Date: Thu, 14 Feb 2013 14:45:00 +0100 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.14 (Win32) X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: - X-Spam-Score: -1.9 
X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00 autolearn=disabled version=3.3.1 X-Scan-Signature: 739ba1b2be5fabc1cc6069058737919f X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 14 Feb 2013 13:45:08 -0000 On Wed, 13 Feb 2013 20:09:25 +0100, Tim Gustafson wrote: >> Do you have something like serial console so you can capture the output? > > We do not, unfortunately. I'm not even sure this box has any serial > ports. The box does have an IPKVM management interface that we use > regularly, but I don't think that has any recording capability. Recording can be as simple as the possibility to scroll back in a terminal window. > I turned on crash dumps by adding this to my /etc/rc.conf: > > dumpdev="auto" > > Will crash dumps be helpful here? Yes. Make sure you have enough swap space for the dump. See 'man dumpon' and 'man savecore' for more information. Ronald. From owner-freebsd-fs@FreeBSD.ORG Thu Feb 14 14:41:10 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id D2E0414A; Thu, 14 Feb 2013 14:41:10 +0000 (UTC) (envelope-from jamie@FreeBSD.org) Received: from m2.gritton.org (gritton.org [199.192.164.235]) by mx1.freebsd.org (Postfix) with ESMTP id B8F348BC; Thu, 14 Feb 2013 14:41:07 +0000 (UTC) Received: from glorfindel.gritton.org (c-174-52-130-157.hsd1.ut.comcast.net [174.52.130.157]) (authenticated bits=0) by m2.gritton.org (8.14.5/8.14.5) with ESMTP id r1EEf0EO094215; Thu, 14 Feb 2013 07:41:00 -0700 (MST) (envelope-from jamie@FreeBSD.org) Message-ID: <511CF77A.2080005@FreeBSD.org> Date: Thu, 14 Feb 2013 07:40:58 -0700 From: Jamie Gritton User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.24) Gecko/20120129 Thunderbird/3.1.16 MIME-Version: 1.0 To: Baptiste Daroussin Subject: Re: Marking some FS as jailable References: <20130212194047.GE12760@ithaqua.etoilebsd.net> <511B1F55.3080500@FreeBSD.org> <20130214132715.GG44004@ithaqua.etoilebsd.net> In-Reply-To: <20130214132715.GG44004@ithaqua.etoilebsd.net> Content-Type: multipart/mixed; boundary="------------040604050308040604010805" Cc: jail@FreeBSD.org, fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 14 Feb 2013 14:41:10 -0000 This is a multi-part message in MIME format. --------------040604050308040604010805 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit On 02/14/13 06:27, Baptiste Daroussin wrote: > On Tue, Feb 12, 2013 at 10:06:29PM -0700, Jamie Gritton wrote: >> On 02/12/13 12:40, Baptiste Daroussin wrote: >>> >>> I would like to mark some filesystem as jailable, here is the one I need: >>> linprocfs, tmpfs and fdescfs, I was planning to do it with adding a >>> allow.mount.${fs} for each one. >>> >>> Anyone has an objection? >> >> Would it make sense for linprocfs to use the existing allow.mount.procfs >> flag? > > Here is a patch that uses allow.mount.procfs for linsysfs and linprocfs. > > It also addd a new allow.mount.tmpfs to allow tmpfs. > > It seems to work here, can anyone confirm this is the right way to do it? 
> > I'll commit in 2 parts: first lin*fs, second tmpfs related things > > http://people.freebsd.org/~bapt/jail-fs.diff There are some problems. The usage on the mount side of things looks correct, but it needs more on the jail side. I'm including a patch just of that part, with a correction in jail.h and further changes in kern_jail.c - Jamie --------------040604050308040604010805 Content-Type: text/plain; name="jail-fs.diff" Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename="jail-fs.diff" Index: sys/jail.h =================================================================== --- sys/jail.h (revision 246791) +++ sys/jail.h (working copy) @@ -227,7 +227,8 @@ #define PR_ALLOW_MOUNT_NULLFS 0x0100 #define PR_ALLOW_MOUNT_ZFS 0x0200 #define PR_ALLOW_MOUNT_PROCFS 0x0400 -#define PR_ALLOW_ALL 0x07ff +#define PR_ALLOW_MOUNT_TMPFS 0x0800 +#define PR_ALLOW_ALL 0x0fff /* * OSD methods Index: kern/kern_jail.c =================================================================== --- kern/kern_jail.c (revision 246791) +++ kern/kern_jail.c (working copy) @@ -206,6 +206,7 @@ "allow.mount.nullfs", "allow.mount.zfs", "allow.mount.procfs", + "allow.mount.tmpfs", }; const size_t pr_allow_names_size = sizeof(pr_allow_names); @@ -221,6 +222,7 @@ "allow.mount.nonullfs", "allow.mount.nozfs", "allow.mount.noprocfs", + "allow.mount.notmpfs", }; const size_t pr_allow_nonames_size = sizeof(pr_allow_nonames); @@ -4208,6 +4210,10 @@ CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_MPSAFE, NULL, PR_ALLOW_MOUNT_PROCFS, sysctl_jail_default_allow, "I", "Processes in jail can mount the procfs file system"); +SYSCTL_PROC(_security_jail, OID_AUTO, mount_tmpfs_allowed, + CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_MPSAFE, + NULL, PR_ALLOW_MOUNT_TMPFS, sysctl_jail_default_allow, "I", + "Processes in jail can mount the tmpfs file system"); SYSCTL_PROC(_security_jail, OID_AUTO, mount_zfs_allowed, CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_MPSAFE, NULL, PR_ALLOW_MOUNT_ZFS, sysctl_jail_default_allow, "I", @@ -4360,6 +4366,8 @@ "B", "Jail may mount the nullfs file system"); SYSCTL_JAIL_PARAM(_allow_mount, procfs, CTLTYPE_INT | CTLFLAG_RW, "B", "Jail may mount the procfs file system"); +SYSCTL_JAIL_PARAM(_allow_mount, tmpfs, CTLTYPE_INT | CTLFLAG_RW, + "B", "Jail may mount the tmpfs file system"); SYSCTL_JAIL_PARAM(_allow_mount, zfs, CTLTYPE_INT | CTLFLAG_RW, "B", "Jail may mount the zfs file system"); --------------040604050308040604010805-- From owner-freebsd-fs@FreeBSD.ORG Thu Feb 14 14:56:11 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id DA7DE799; Thu, 14 Feb 2013 14:56:11 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: from mail-wg0-x22a.google.com (mail-wg0-x22a.google.com [IPv6:2a00:1450:400c:c00::22a]) by mx1.freebsd.org (Postfix) with ESMTP id 2AD6996E; Thu, 14 Feb 2013 14:56:11 +0000 (UTC) Received: by mail-wg0-f42.google.com with SMTP id 12so62027wgh.5 for ; Thu, 14 Feb 2013 06:56:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:sender:date:from:to:cc:subject:message-id:references :mime-version:content-type:content-disposition:in-reply-to :user-agent; bh=bfOdk5tXPIQBtQYGdv37IgMx4yh86gt2HwAC9Lgv9dQ=; b=nrEOAJg63xVH79KQkSMNHq2TqSCvjK6r6fa/q3Yk0UDMboSr4BS7l85VisERrWWv17 chwQprm7jbD31PrxGWYYGfgsa8zGgZjzHrcbOgFN6jnhhAGKGllzHDLtiAyoo+eBJRGV WTh9pYxLepmu8Js5HMm8GNhhRcU/kuG1ZYiK1EQcawCR5fg4fRPrWXHx3Q17UmI9bSsq 
nJha9DVKbOnS3azcOBxKG0/P2H4Xv6hhS5kiOw4OzIN4Bdfp54CzcrSaLwNjXczn4pwH RJ/qmGgV+S0hE8redQt3qI/LZR0NxnfKejtrwKT4Dnp+G9U3UE4BYnuMEMmjCkMwqVxP 9/7w== X-Received: by 10.180.105.195 with SMTP id go3mr17593518wib.13.1360853764297; Thu, 14 Feb 2013 06:56:04 -0800 (PST) Received: from ithaqua.etoilebsd.net (ithaqua.etoilebsd.net. [37.59.37.188]) by mx.google.com with ESMTPS id hb9sm48701439wib.3.2013.02.14.06.56.02 (version=TLSv1 cipher=RC4-SHA bits=128/128); Thu, 14 Feb 2013 06:56:03 -0800 (PST) Sender: Baptiste Daroussin Date: Thu, 14 Feb 2013 15:56:00 +0100 From: Baptiste Daroussin To: Jamie Gritton Subject: Re: Marking some FS as jailable Message-ID: <20130214145600.GI44004@ithaqua.etoilebsd.net> References: <20130212194047.GE12760@ithaqua.etoilebsd.net> <511B1F55.3080500@FreeBSD.org> <20130214132715.GG44004@ithaqua.etoilebsd.net> <511CF77A.2080005@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="svExV93C05KqedWb" Content-Disposition: inline In-Reply-To: <511CF77A.2080005@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: jail@FreeBSD.org, fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 14 Feb 2013 14:56:11 -0000 --svExV93C05KqedWb Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Feb 14, 2013 at 07:40:58AM -0700, Jamie Gritton wrote: > On 02/14/13 06:27, Baptiste Daroussin wrote: > > On Tue, Feb 12, 2013 at 10:06:29PM -0700, Jamie Gritton wrote: > >> On 02/12/13 12:40, Baptiste Daroussin wrote: > >>> > >>> I would like to mark some filesystem as jailable, here is the one I n= eed: > >>> linprocfs, tmpfs and fdescfs, I was planning to do it with adding a > >>> allow.mount.${fs} for each one. > >>> > >>> Anyone has an objection? > >> > >> Would it make sense for linprocfs to use the existing allow.mount.proc= fs > >> flag? > > > > Here is a patch that uses allow.mount.procfs for linsysfs and linprocfs. > > > > It also addd a new allow.mount.tmpfs to allow tmpfs. > > > > It seems to work here, can anyone confirm this is the right way to do i= t? > > > > I'll commit in 2 parts: first lin*fs, second tmpfs related things > > > > http://people.freebsd.org/~bapt/jail-fs.diff >=20 > There are some problems. The usage on the mount side of things looks > correct, but it needs more on the jail side. I'm including a patch just > of that part, with a correction in jail.h and further changes in kern_jai= l.c >=20 > - Jamie Thank you the patch has been updated with your fixes. 
regards Bapt --svExV93C05KqedWb Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlEc+wAACgkQ8kTtMUmk6EyHigCff8gnZ9sdNZA9E0h5Cv1pJG6P 5FIAn2vpcpfWQKhQppv4HF9CjuTyJ6S8 =KvSM -----END PGP SIGNATURE----- --svExV93C05KqedWb-- From owner-freebsd-fs@FreeBSD.ORG Thu Feb 14 14:58:54 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 261EB856; Thu, 14 Feb 2013 14:58:54 +0000 (UTC) (envelope-from jamie@FreeBSD.org) Received: from m2.gritton.org (gritton.org [199.192.164.235]) by mx1.freebsd.org (Postfix) with ESMTP id ED98698D; Thu, 14 Feb 2013 14:58:53 +0000 (UTC) Received: from glorfindel.gritton.org (c-174-52-130-157.hsd1.ut.comcast.net [174.52.130.157]) (authenticated bits=0) by m2.gritton.org (8.14.5/8.14.5) with ESMTP id r1EEwqeL094374; Thu, 14 Feb 2013 07:58:53 -0700 (MST) (envelope-from jamie@FreeBSD.org) Message-ID: <511CFBAC.3000803@FreeBSD.org> Date: Thu, 14 Feb 2013 07:58:52 -0700 From: Jamie Gritton User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.24) Gecko/20120129 Thunderbird/3.1.16 MIME-Version: 1.0 To: Baptiste Daroussin Subject: Re: Marking some FS as jailable References: <20130212194047.GE12760@ithaqua.etoilebsd.net> <511B1F55.3080500@FreeBSD.org> <20130214132715.GG44004@ithaqua.etoilebsd.net> <511CF77A.2080005@FreeBSD.org> <20130214145600.GI44004@ithaqua.etoilebsd.net> In-Reply-To: <20130214145600.GI44004@ithaqua.etoilebsd.net> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: jail@FreeBSD.org, fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 14 Feb 2013 14:58:54 -0000 On 02/14/13 07:56, Baptiste Daroussin wrote: > On Thu, Feb 14, 2013 at 07:40:58AM -0700, Jamie Gritton wrote: >> On 02/14/13 06:27, Baptiste Daroussin wrote: >>> On Tue, Feb 12, 2013 at 10:06:29PM -0700, Jamie Gritton wrote: >>>> On 02/12/13 12:40, Baptiste Daroussin wrote: >>>>> >>>>> I would like to mark some filesystem as jailable, here is the one I need: >>>>> linprocfs, tmpfs and fdescfs, I was planning to do it with adding a >>>>> allow.mount.${fs} for each one. >>>>> >>>>> Anyone has an objection? >>>> >>>> Would it make sense for linprocfs to use the existing allow.mount.procfs >>>> flag? >>> >>> Here is a patch that uses allow.mount.procfs for linsysfs and linprocfs. >>> >>> It also addd a new allow.mount.tmpfs to allow tmpfs. >>> >>> It seems to work here, can anyone confirm this is the right way to do it? >>> >>> I'll commit in 2 parts: first lin*fs, second tmpfs related things >>> >>> http://people.freebsd.org/~bapt/jail-fs.diff >> >> There are some problems. The usage on the mount side of things looks >> correct, but it needs more on the jail side. I'm including a patch just >> of that part, with a correction in jail.h and further changes in kern_jail.c > > Thank you the patch has been updated with your fixes. One more bit (literally): PR_ALLOW_ALL in sys/jail.h needs updating. 
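From the admin side, and assuming a patch along the lines discussed here gets committed, using the new knob would look roughly like this; the jail name and paths are hypothetical, and mounting inside a jail also requires allow.mount plus a relaxed enforce_statfs:

# host-wide default for new jails (sysctl name as added in the patch):
sysctl security.jail.mount_tmpfs_allowed=1

# or per jail, e.g. in a jail(8) jail.conf:
#   testjail {
#       path = /jails/testjail;
#       allow.mount;
#       allow.mount.tmpfs;
#       enforce_statfs = 1;
#   }

# inside the jail:
mount -t tmpfs tmpfs /tmp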
- Jamie From owner-freebsd-fs@FreeBSD.ORG Thu Feb 14 15:09:11 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id A263CBDA; Thu, 14 Feb 2013 15:09:11 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: from mail-wi0-f174.google.com (mail-wi0-f174.google.com [209.85.212.174]) by mx1.freebsd.org (Postfix) with ESMTP id DF8B7A09; Thu, 14 Feb 2013 15:09:10 +0000 (UTC) Received: by mail-wi0-f174.google.com with SMTP id hi8so7172391wib.13 for ; Thu, 14 Feb 2013 07:09:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:sender:date:from:to:cc:subject:message-id:references :mime-version:content-type:content-disposition:in-reply-to :user-agent; bh=EjYSivbjrOmN3F8+oUHPeRwNXWDtz77hoMamqLXRNhU=; b=nmXfpI4+Pmu7+MSdGWFA2di1A0P3K9crcn0U3euBoX9zj3Ltif0bAKTNI6CiXXzrtS U143lLvceeBy0j3utaFq/5L+98U+UdB6sSsWPQrFK6Rid5ho98SBn7lH0ryfV2vfeL8d J6BR+TWgwaMMi8bsZ2hWWCm+To2L+ou0K8Wtz/5Ftt2A7MFHg+UPHY+Q8Ft1FdpoJg7g pvfvRk4lvMUXF8gAC4NsA042ai7FBym08ZCKE9DVbMrlnNHwnoBFbh2mVDLllcuJUaaC TAfI5BG3XLNj+sBiZAx3ymUGv1m425XY0CkQ5IvBhglWj6g0tj9EgWxIR0qomopGw47I 6HyQ== X-Received: by 10.180.8.4 with SMTP id n4mr17681326wia.13.1360854541067; Thu, 14 Feb 2013 07:09:01 -0800 (PST) Received: from ithaqua.etoilebsd.net (ithaqua.etoilebsd.net. [37.59.37.188]) by mx.google.com with ESMTPS id fg6sm33086802wib.10.2013.02.14.07.08.59 (version=TLSv1 cipher=RC4-SHA bits=128/128); Thu, 14 Feb 2013 07:08:59 -0800 (PST) Sender: Baptiste Daroussin Date: Thu, 14 Feb 2013 16:08:57 +0100 From: Baptiste Daroussin To: Jamie Gritton Subject: Re: Marking some FS as jailable Message-ID: <20130214150857.GK44004@ithaqua.etoilebsd.net> References: <20130212194047.GE12760@ithaqua.etoilebsd.net> <511B1F55.3080500@FreeBSD.org> <20130214132715.GG44004@ithaqua.etoilebsd.net> <511CF77A.2080005@FreeBSD.org> <20130214145600.GI44004@ithaqua.etoilebsd.net> <511CFBAC.3000803@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="2feizKym29CxAecD" Content-Disposition: inline In-Reply-To: <511CFBAC.3000803@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: jail@FreeBSD.org, fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 14 Feb 2013 15:09:11 -0000 --2feizKym29CxAecD Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Feb 14, 2013 at 07:58:52AM -0700, Jamie Gritton wrote: > On 02/14/13 07:56, Baptiste Daroussin wrote: > > On Thu, Feb 14, 2013 at 07:40:58AM -0700, Jamie Gritton wrote: > >> On 02/14/13 06:27, Baptiste Daroussin wrote: > >>> On Tue, Feb 12, 2013 at 10:06:29PM -0700, Jamie Gritton wrote: > >>>> On 02/12/13 12:40, Baptiste Daroussin wrote: > >>>>> > >>>>> I would like to mark some filesystem as jailable, here is the one I= need: > >>>>> linprocfs, tmpfs and fdescfs, I was planning to do it with adding a > >>>>> allow.mount.${fs} for each one. > >>>>> > >>>>> Anyone has an objection? > >>>> > >>>> Would it make sense for linprocfs to use the existing allow.mount.pr= ocfs > >>>> flag? > >>> > >>> Here is a patch that uses allow.mount.procfs for linsysfs and linproc= fs. > >>> > >>> It also addd a new allow.mount.tmpfs to allow tmpfs. 
> >>> > >>> It seems to work here, can anyone confirm this is the right way to do= it? > >>> > >>> I'll commit in 2 parts: first lin*fs, second tmpfs related things > >>> > >>> http://people.freebsd.org/~bapt/jail-fs.diff > >> > >> There are some problems. The usage on the mount side of things looks > >> correct, but it needs more on the jail side. I'm including a patch just > >> of that part, with a correction in jail.h and further changes in kern_= jail.c > > > > Thank you the patch has been updated with your fixes. >=20 > One more bit (literally): PR_ALLOW_ALL in sys/jail.h needs updating. >=20 > - Jamie Fixed thanks Bapt --2feizKym29CxAecD Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlEc/gkACgkQ8kTtMUmk6Ez32ACgn5dhl2qu4auCzE22o/4ojZ/K zlAAoLAlABbev6X7zOadrZCO+DJiusDU =PN4l -----END PGP SIGNATURE----- --2feizKym29CxAecD-- From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 01:13:43 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 42963BE5 for ; Fri, 15 Feb 2013 01:13:43 +0000 (UTC) (envelope-from jdc@koitsu.org) Received: from qmta01.emeryville.ca.mail.comcast.net (qmta01.emeryville.ca.mail.comcast.net [IPv6:2001:558:fe2d:43:76:96:30:16]) by mx1.freebsd.org (Postfix) with ESMTP id 1C8F48B0 for ; Fri, 15 Feb 2013 01:13:43 +0000 (UTC) Received: from omta05.emeryville.ca.mail.comcast.net ([76.96.30.43]) by qmta01.emeryville.ca.mail.comcast.net with comcast id 0NWJ1l00K0vp7WLA1RDij7; Fri, 15 Feb 2013 01:13:42 +0000 Received: from koitsu.strangled.net ([67.180.84.87]) by omta05.emeryville.ca.mail.comcast.net with comcast id 0RDh1l00T1t3BNj8RRDicS; Fri, 15 Feb 2013 01:13:42 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id A838973A1C; Thu, 14 Feb 2013 17:13:41 -0800 (PST) Date: Thu, 14 Feb 2013 17:13:41 -0800 From: Jeremy Chadwick To: fusionfoto@gmail.com Subject: Re: Is this an SSD problem or a controller problem? Message-ID: <20130215011341.GA96241@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.21 (2010-09-15) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net; s=q20121106; t=1360890822; bh=ON6/bSWNq4u8XnJyOj6YSvku97vPlzIPb4iNBJpRQ1c=; h=Received:Received:Received:Date:From:To:Subject:Message-ID: MIME-Version:Content-Type; b=oIeiojVfFarfrZFNlqxCk77YpqFfv3V3CSeOxIzo5A/a+OV9FaLYfN4PM2e0EnB9R 2SDraGGtPH8vsz/RqPgNjHnQID+wZILht4ynfVsVIDnA0JOHpEJcIj9uRn+SCh8bAT YCn3ngtKBEFeIuvgEBqa308ctjM0AmOpvsup8Ghn11/d1VOCHG8RpYom9xFn0qXTWO mGUr4xtp1VZcdjyskidmyHdniJI9OGQBTwaeqd8FkYq7mAJUAdHQCUnNZFCAgAlvAO X6NJM568mLOF1ByMCNLkBl8pzeBgFiL4UBXeGVyBr5dn8GUaKGlAcOdpAMP8quzLQS nIjiG/NW7j4TQ== Cc: freebsd-fs@freebsd.org, mav@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 01:13:43 -0000 (Please keep me CC'd as I am not subscribed to this list) (Also CC'ing mav@ since he can shed some light on this too) Re: http://lists.freebsd.org/pipermail/freebsd-questions/2013-February/249183.html It is neither an SSD problem nor a controller problem. FreeBSD is issuing a specific ATA CDB command to the SSD, and the SSD rejects this request, returning ABRT status. This is perfectly normal per ATA specification; the "error" is harmless. 
You should open a PR on this matter, as FreeBSD should be adjusted in some manner to deal with this situation, either via appropriate workarounds or a drive quirk. mav@ would know what's best. You will need to provide output from the following commands in your PR: * dmesg * camcontrol identify ada15 * pciconf -lvcb * Same lines you did in your Email Further technical details, which you can put into the PR if you want: Looking at src/sys/cam/ata/ata_all.c we can see that the output of the ACB is in bytes, output per ata_cmd_string(). Thus: > ACB: ef 90 00 00 00 40 00 00 00 00 02 00 Decoding per T13/2015-D rev 3 (ATA8-ACS2) working draft spec: 0xef = command = SET FEATURES 0x90 = features = Disable use of SATA feature 0x00 0x00 0x00 = lba_* = n/a 0x40 = device = n/a 0x00 0x00 0x00 = lba_*_exp = n/a 0x00 = features_exp = n/a 0x02 = sector_count = Enable/Disable DMA Setup FIS Auto-Activate Optimisation 0x00 = sector_count_exp = n/a DMA Setup FIS is defined as: "7.50.16.3 Enable/Disable DMA Setup FIS Auto-Activate Optimization A Count field value of 02h is used to enable or disable DMA Setup FIS Auto-Activate optimization. See SATA 2.6 for more information. The enable/disable state for the auto-activate optimization shall be preserved across software reset. The enable/disable state for the auto-activate optimization shall be reset to its default state upon COMRESET." This feature has to do with NCQ capability for certain types of DMA transfers. src/sys/cam/ata/ata_xpt.c contains the responsible code. I could be wrong here (mav@ please correct me), but in probestart(), there is: 452 case PROBE_SETDMAAA: 453 cam_fill_ataio(ataio, 454 1, 455 probedone, 456 CAM_DIR_NONE, 457 0, 458 NULL, 459 0, 460 30*1000); 461 ata_28bit_cmd(ataio, ATA_SETFEATURES, 462 (softc->caps & CTS_SATA_CAPS_H_DMAAA) ? 0x10 : 0x90, 463 0, 0x02); 464 break; CTS_SATA_CAPS_H_DMAAA is defined per include/cam/cam_ccb.h as "Auto-activation", and its name implies DMA, so this would match the feature in question. This would explain why you see it when the machine boots (xpt(4) probe), as well as when smartctl is run or smartd starts (uses xpt(4)). However, I noticed this piece of code in probedone(): 739 /* 740 * Some HP SATA disks report supported DMA Auto-Activation, 741 * but return ABORT on attempt to enable it. 742 */ 743 } else if (softc->action == PROBE_SETDMAAA && 744 status == CAM_ATA_STATUS_ERROR) { 745 goto noerror; Which makes me scratch my head -- the comment and logic seems to imply there shouldn't be any error condition reported, but you do see one. This also implies that the drive advertises per SATA protocol DMA AA yet when xpt(4) tries to disable it the drive rejects that request with ABRT. I don't know why OCZ rejects disabling that feature, but whatever. Addendum note for mav@ -- we also need to add an ADA_Q_4K quirk entry to ata_da.c for Vertex 4 SSDs ("OCZ-VERTEX4"). -- | Jeremy Chadwick jdc@koitsu.org | | UNIX Systems Administrator http://jdc.koitsu.org/ | | Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 01:30:38 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id C6A8F265 for ; Fri, 15 Feb 2013 01:30:38 +0000 (UTC) (envelope-from jdc@koitsu.org) Received: from qmta09.emeryville.ca.mail.comcast.net (qmta09.emeryville.ca.mail.comcast.net [IPv6:2001:558:fe2d:43:76:96:30:96]) by mx1.freebsd.org (Postfix) with ESMTP id 969BD95A for ; Fri, 15 Feb 2013 01:30:38 +0000 (UTC) Received: from omta07.emeryville.ca.mail.comcast.net ([76.96.30.59]) by qmta09.emeryville.ca.mail.comcast.net with comcast id 0R6C1l0011GXsucA9RWe8N; Fri, 15 Feb 2013 01:30:38 +0000 Received: from koitsu.strangled.net ([67.180.84.87]) by omta07.emeryville.ca.mail.comcast.net with comcast id 0RWd1l00Z1t3BNj8URWdZg; Fri, 15 Feb 2013 01:30:38 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 898BA73A1D; Thu, 14 Feb 2013 17:30:37 -0800 (PST) Date: Thu, 14 Feb 2013 17:30:37 -0800 From: Jeremy Chadwick To: fusionfoto@gmail.com Subject: Re: Is this an SSD problem or a controller problem? Message-ID: <20130215013037.GA97264@icarus.home.lan> References: <20130215011341.GA96241@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20130215011341.GA96241@icarus.home.lan> User-Agent: Mutt/1.5.21 (2010-09-15) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net; s=q20121106; t=1360891838; bh=iCo+se8LEh6UWYgan9gK7qNj0WbEo0565x0b9WZYi48=; h=Received:Received:Received:Date:From:To:Subject:Message-ID: MIME-Version:Content-Type; b=JaF97GFaSOopQzuNw6fW9KFnW60TTypK2fyQ/prfa3yFVd7xxZFbzYahyupPr3dqp gKap4DNMpXuZGqEQIlWJHnnhcikjodpPsDoPEtq+BmPtfgfUl0sr2Oo6fTnGODFHYO VFQ8sfI4m7rCcFW041DsBWkAuLiZop2O0gfPCz+4wESe87yvvnLefWtwNMOpzL6iCp /EUKFE8T10dKBIw/HCcPZ0eDuMp3r2kiGRbLXiUnaoku7vn4qMgkKJHu4GijaANUe2 0G6sZclxIU4WGUwcVbJWAQKDZR+aYgiuhtDa/3Q3rf9Y2uD8qBhfseizdP8ostmHgd SsPuoBcVNaRFQ== Cc: freebsd-fs@freebsd.org, mav@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 01:30:38 -0000 On Thu, Feb 14, 2013 at 05:13:41PM -0800, Jeremy Chadwick wrote: > (Please keep me CC'd as I am not subscribed to this list) > {snip} Oops, seems I CC'd the wrong list on this -- this should have gone to freebsd-questions@freebsd.org not freebsd-fs@freebsd.org. My mistake. -questions, -fs, -stable, blah blah blah... can't keep track. mav@ and/or FF, if you reply you may want to change the CC line back to freebsd-questions. Your call. -- | Jeremy Chadwick jdc@koitsu.org | | UNIX Systems Administrator http://jdc.koitsu.org/ | | Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 01:49:11 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 50FD07C5; Fri, 15 Feb 2013 01:49:11 +0000 (UTC) (envelope-from prvs=175882b26e=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 9F27CA26; Fri, 15 Feb 2013 01:49:10 +0000 (UTC) Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50002209222.msg; Fri, 15 Feb 2013 01:49:02 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Fri, 15 Feb 2013 01:49:02 +0000 (not processed: message from valid local sender) X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=175882b26e=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: From: "Steven Hartland" To: "Jeremy Chadwick" , References: <20130215011341.GA96241@icarus.home.lan> Subject: Re: Is this an SSD problem or a controller problem? Date: Fri, 15 Feb 2013 01:49:13 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, mav@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 01:49:11 -0000 ----- Original Message ----- From: "Jeremy Chadwick" ... > Addendum note for mav@ -- we also need to add an ADA_Q_4K quirk entry to > ata_da.c for Vertex 4 SSDs ("OCZ-VERTEX4"). If you can send me the output from camcontrol identify for this drive I can sort this. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. 
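The output being asked for here (plus the rest of what Jeremy listed for a PR) can be collected with stock tools; ada15 is the device name from the original report and the file names are arbitrary:

camcontrol identify ada15 > vertex4-identify.txt
dmesg                     > vertex4-dmesg.txt
pciconf -lvcb             > vertex4-pciconf.txt
# attach these to the PR, e.g. via send-pr(1)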
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 09:00:30 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id AD1F11D8 for ; Fri, 15 Feb 2013 09:00:30 +0000 (UTC) (envelope-from grarpamp@gmail.com) Received: from mail-oa0-f51.google.com (mail-oa0-f51.google.com [209.85.219.51]) by mx1.freebsd.org (Postfix) with ESMTP id 8021BB0A for ; Fri, 15 Feb 2013 09:00:30 +0000 (UTC) Received: by mail-oa0-f51.google.com with SMTP id h2so3460241oag.24 for ; Fri, 15 Feb 2013 01:00:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:date:message-id:subject:from:to :content-type; bh=8tqkHSaTydycE50AOzvcBueHh30CqJNZJSRq75RG4Mk=; b=MiSUFVVaEXXKb32SPM8lx7Z4IAeYwWUZYZxzmeKNkrgLZ1OJw+RsgfY40qRCo8uajP OrGyTNUp94sk9wdzfoEZ9hEC3Zkl74WTCR5pGnC+Em1UP7kYk6kdNA6TC1LMsnG5ZGko oTSnrO6FCes43s79lLYCIQq9MhxdCfQohgXte7lMPfnpsByqwnWMqjLPWpHt4VG0O9mt bvocrG3tEh5Oc0y462BsHfcPP6KDm/55l+2GylVZLPV9wjUm6ukAoVvP/icRZxa10CuH kIRqVuGRuXZBf5xWJ04B1n6lhUrqwxroW0242rkc1FtQ2aLq/IUdi45GqfPmF9+oRz1t 9jRA== MIME-Version: 1.0 X-Received: by 10.60.24.135 with SMTP id u7mr1201133oef.90.1360918823663; Fri, 15 Feb 2013 01:00:23 -0800 (PST) Received: by 10.60.146.203 with HTTP; Fri, 15 Feb 2013 01:00:23 -0800 (PST) Date: Fri, 15 Feb 2013 04:00:23 -0500 Message-ID: Subject: Crazy ZFS ZIL options: md(4) umass(4) From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 09:00:30 -0000 I have thousands of small files being written most under 8KiB, they either end up being removed or combined in various ways to produce a set of data that is stored long term. I also have tens of 10-50MiB files similarly, but rarely. It's not fully clear to me the benefits of a split ZIL. Some say a split ZIL will ward off some fragmentation, which pushing over 80% I'm sure to see otherwise. Plus a speed boost if on faster media. And maybe even no need to commit some ZIL to disk as small files are removed before ZFS decides to aggregate? Anyway, use case aside, I can put 1GiB of ram as ZIL.Same for 32GiB USB. RAM is obviously fast and power fail prone. USB is slow and power safe. Either could be mirrored, 2xRAM, 2xUSB. - If I lose power on RAM, will the disk still be consistent? - What data integrity does ZIL have? None? ZFS dataset's sha256? - Any production experiences with this crazy ideas? Thx. 
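For concreteness, the two variants being floated would be set up roughly as follows; the pool name tank and the device names are hypothetical, and (as the replies below point out) a RAM-backed log defeats the point of having a ZIL at all:

# swap-backed md(4) devices as a mirrored log vdev -- illustrative only
mdconfig -a -t swap -s 1g -u 0
mdconfig -a -t swap -s 1g -u 1
zpool add tank log mirror md0 md1

# or: two USB sticks as a mirrored log vdev
zpool add tank log mirror da1 da2

# a log vdev can later be removed again
zpool remove tank mirror-1        # use the mirror name shown by "zpool status tank"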
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 09:10:25 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id EA52183D for ; Fri, 15 Feb 2013 09:10:25 +0000 (UTC) (envelope-from grarpamp@gmail.com) Received: from mail-ob0-f172.google.com (mail-ob0-f172.google.com [209.85.214.172]) by mx1.freebsd.org (Postfix) with ESMTP id BE8DBBFE for ; Fri, 15 Feb 2013 09:10:25 +0000 (UTC) Received: by mail-ob0-f172.google.com with SMTP id tb18so3373104obb.17 for ; Fri, 15 Feb 2013 01:10:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:content-type; bh=bYE1Q32sT9yzWfVbp1Owry0vEOOMCb6zm4GmukQ+e/E=; b=P6l8wJekbdCKjuX+DR+AHXlO4P2jh3V6Yrng9rvjCGuW7E9VDbfxOWMS7ZYMz8w7yh bay80MEAyqIjOWHX+pjfj+lk5qfDvbfSYKAwgwHtN2lkNLdcgs3ZxX66XiaPKDl86Qmm 9IqwigbeSRR3n2rBprTCD9hfe34ttXiaoHQzEy6PFRypibNzzRl3Ojvo4B5N1EASYw7h CyAcrYDEetYl9ruvzmSVpIutc3x0xadwpSITLI1ZOPz1X7sf0wyt20yxQ0tOCp++JIm4 6OnPUWqKVmU79zfclc/vScuTEUaui6r44gyiZekp7r/U9bZrEim69EvgTqDYDb5/O/xs 114A== MIME-Version: 1.0 X-Received: by 10.182.182.101 with SMTP id ed5mr1283491obc.23.1360919419483; Fri, 15 Feb 2013 01:10:19 -0800 (PST) Received: by 10.60.146.203 with HTTP; Fri, 15 Feb 2013 01:10:19 -0800 (PST) In-Reply-To: References: Date: Fri, 15 Feb 2013 04:10:19 -0500 Message-ID: Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 09:10:26 -0000 I would GELI on the USB as well. 
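Layering geli(8) under such a log device would look roughly like this; da1 is a hypothetical USB device node, and the same caveats about USB latency and reliability apply:

geli init -s 4096 /dev/da1        # one-time init, 4K sectors; prompts for a passphrase
geli attach /dev/da1              # creates /dev/da1.eli
zpool add tank log /dev/da1.eli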
From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 09:16:14 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E6E4B8E7 for ; Fri, 15 Feb 2013 09:16:14 +0000 (UTC) (envelope-from delphij@gmail.com) Received: from mail-lb0-f179.google.com (mail-lb0-f179.google.com [209.85.217.179]) by mx1.freebsd.org (Postfix) with ESMTP id 50E1AC2C for ; Fri, 15 Feb 2013 09:16:14 +0000 (UTC) Received: by mail-lb0-f179.google.com with SMTP id j14so2397869lbo.24 for ; Fri, 15 Feb 2013 01:16:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=RvcBlzpu8O3uE5Aya3QystKNScgFe3rpGZ3apDY/kxA=; b=f5AzOHRWUcr0Dxv0ggDfBva1wGhMbCy5Gfv3SIRGX3vSM0ttj5oJ7DgrvNXD3zP3gf 2LR9YWTHig9+FotcureUqs1uv0Bw4YF/LDadhFe9BkWWdDPBrqHz8z39uS7Eh8Qo5Ezf F4r5GV7YfRwSD9W0TRaTPayhkHsER+/ifV2JgfCdlM39gMCe6o8laSyfLdlZk2J54guO w+/rTu8Ioe0hUekotEJSKiaAvAE2hTtOhN2OBnqGZwLGjrDMZOVoZix2Mi/8k/v8rwd+ TYISuQNsQyKfEq3f28xpGUoNsTGY/CFdj8HsHjwErpcOs5Kb3m6kY7MUat1XwRdEFiNZ 2ySg== MIME-Version: 1.0 X-Received: by 10.112.88.10 with SMTP id bc10mr1798878lbb.70.1360919773085; Fri, 15 Feb 2013 01:16:13 -0800 (PST) Received: by 10.114.29.165 with HTTP; Fri, 15 Feb 2013 01:16:12 -0800 (PST) In-Reply-To: References: Date: Fri, 15 Feb 2013 01:16:12 -0800 Message-ID: Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) From: Xin LI To: grarpamp Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 09:16:15 -0000 On Fri, Feb 15, 2013 at 1:00 AM, grarpamp wrote: > I have thousands of small files being written most under 8KiB, they > either end up being removed or combined in various ways to > produce a set of data that is stored long term. I also have > tens of 10-50MiB files similarly, but rarely. It's not fully clear > to me the benefits of a split ZIL. Some say a split ZIL will ward > off some fragmentation, which pushing over 80% I'm sure to see > otherwise. Plus a speed boost if on faster media. And maybe > even no need to commit some ZIL to disk as small files are > removed before ZFS decides to aggregate? > > Anyway, use case aside, I can put 1GiB of ram as ZIL.Same for 32GiB USB. > RAM is obviously fast and power fail prone. > USB is slow and power safe. > Either could be mirrored, 2xRAM, 2xUSB. > > - If I lose power on RAM, will the disk still be consistent? > - What data integrity does ZIL have? None? ZFS dataset's sha256? > - Any production experiences with this crazy ideas? Why don't just set sync=disabled? Cheers, -- Xin LI https://www.delphij.net/ FreeBSD - The Power to Serve! 
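That suggestion amounts to the following, set per dataset, and only makes sense where losing the last few seconds of acknowledged synchronous writes after a crash is acceptable; the dataset name is hypothetical:

zfs set sync=disabled tank/scratch    # sync writes are acknowledged without touching a ZIL
zfs get sync tank/scratch             # verify; "standard" is the default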
Live free or die From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 09:23:14 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id C8A9C9AA for ; Fri, 15 Feb 2013 09:23:14 +0000 (UTC) (envelope-from amvandemore@gmail.com) Received: from mail-wg0-f51.google.com (mail-wg0-f51.google.com [74.125.82.51]) by mx1.freebsd.org (Postfix) with ESMTP id 68666CC7 for ; Fri, 15 Feb 2013 09:23:13 +0000 (UTC) Received: by mail-wg0-f51.google.com with SMTP id 8so2639290wgl.30 for ; Fri, 15 Feb 2013 01:23:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=e6nO3/XcbVnVp2wlooRSYBopqOc/W9u6w2NfllA9vLg=; b=drjpWMgDvpykh7QPJrYCyPCHGzXElQzEdib1emveEkxYVF76NJquNlAyIdAQrSsY14 ea1OlCAlImViLX1S2Lvgi7X30VdnwmlCHRyIkoJ24w5AcYQ54OG6NKzppjmIN1v0DtpI 1z2IpOeGAsrkZZouxjJq6P/nB88Ldm0dobY7/RZdiCDr9XOHHZhDsYxcTokABwXGz8Da p/zwI+lG6KaFR9y/uOn6M7lDH0xCINyRdEWY3w3UjpU+tJAfg3ddcNSmrWlCBeMrbYzi d/Us45NIwBFijj4X6YR6bvMTRexgx/xQIvF1V5jS4XjU0IJcShyRRtegFJ4dnp22h598 Lfvg== MIME-Version: 1.0 X-Received: by 10.180.93.168 with SMTP id cv8mr2775810wib.5.1360920192843; Fri, 15 Feb 2013 01:23:12 -0800 (PST) Received: by 10.194.44.42 with HTTP; Fri, 15 Feb 2013 01:23:12 -0800 (PST) In-Reply-To: References: Date: Fri, 15 Feb 2013 03:23:12 -0600 Message-ID: Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) From: Adam Vande More To: grarpamp Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 09:23:14 -0000 On Fri, Feb 15, 2013 at 3:00 AM, grarpamp wrote: > I have thousands of small files being written most under 8KiB, they > either end up being removed or combined in various ways to > produce a set of data that is stored long term. I also have > tens of 10-50MiB files similarly, but rarely. It's not fully clear > to me the benefits of a split ZIL. What is split ZIL? 
-- Adam Vande More From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 09:24:57 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 4B2B9A29 for ; Fri, 15 Feb 2013 09:24:57 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id AD256CD6 for ; Fri, 15 Feb 2013 09:24:56 +0000 (UTC) Received: from [10.5.0.53] ([87.250.40.22]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.5/8.14.5) with ESMTP id r1F9A88H032270 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Fri, 15 Feb 2013 11:10:08 +0200 (EET) (envelope-from daniel@digsys.bg) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\)) Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) From: Daniel Kalchev In-Reply-To: Date: Fri, 15 Feb 2013 10:10:09 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: <43BC680B-4FC6-4CDB-A590-12ACC78959B7@digsys.bg> References: To: grarpamp X-Mailer: Apple Mail (2.1499) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 09:24:57 -0000 In short, don't. ZIL is not a cache. ZIL is there for you to recover transactions in case of a crash. It is your safety net. If you don't have a low-latency SSD, put it on a separate spinning disk, but by all means never on RAM, and use USB only as a last resort, because USB is slow for most uses and it is also extremely unreliable. Use the RAM for ARC, it will provide more performance. Daniel On Feb 15, 2013, at 10:00 AM, grarpamp wrote: > I have thousands of small files being written most under 8KiB, they > either end up being removed or combined in various ways to > produce a set of data that is stored long term. I also have > tens of 10-50MiB files similarly, but rarely. It's not fully clear > to me the benefits of a split ZIL. Some say a split ZIL will ward > off some fragmentation, which pushing over 80% I'm sure to see > otherwise. Plus a speed boost if on faster media. And maybe > even no need to commit some ZIL to disk as small files are > removed before ZFS decides to aggregate? > > Anyway, use case aside, I can put 1GiB of ram as ZIL. Same for 32GiB USB. > RAM is obviously fast and power fail prone. > USB is slow and power safe. > Either could be mirrored, 2xRAM, 2xUSB. > > - If I lose power on RAM, will the disk still be consistent? > - What data integrity does ZIL have? None? ZFS dataset's sha256? > - Any production experiences with this crazy ideas? > > Thx. 
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 09:34:52 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 92F45B12 for ; Fri, 15 Feb 2013 09:34:52 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.186]) by mx1.freebsd.org (Postfix) with ESMTP id 2FB1FD24 for ; Fri, 15 Feb 2013 09:34:51 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mreu3) with ESMTP (Nemesis) id 0MMcRu-1Ty4Db13PC-008FbS; Fri, 15 Feb 2013 10:34:45 +0100 Message-ID: <511E0133.9010600@brockmann-consult.de> Date: Fri, 15 Feb 2013 10:34:43 +0100 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) References: <43BC680B-4FC6-4CDB-A590-12ACC78959B7@digsys.bg> In-Reply-To: <43BC680B-4FC6-4CDB-A590-12ACC78959B7@digsys.bg> X-Enigmail-Version: 1.5 X-Provags-ID: V02:K0:Darbq9wIRF2yVwnlHvQ144peS5k8f5LOOojlVMyEihp Gw5cjMAKxbbRKHWcAt5DFlgGmUe8RWHfLhvezIVCAPU7tsfjb8 dDbHXgMSJ3HuQM5D/04q2BSPJkDw6qSfv8v99bXGVbhB7tAHa6 RtHQuBfdDiNBSnBv1/u793AguljJ/RsnyIpxyiwEmW6x0+5/XN +gJttIYrKT/UFqbLCQhuRrv0fyCpNF9ZB6Rgie+6O70GSAFxla jy4RrzVWGx6+eY16wgdZP/MZGg6m9Vr/wUPJpbl3amQ9ZEpFUA B/MkTamntenwpWDitR3EhQR8EXMC6UGQ0q7QoL/Mtnk9+Tckp5 TfomnhLFP2qfCt8bMOhA3cBrPZxZL9wzPYoHRW9a7 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 09:34:52 -0000 On 2013-02-15 10:10, Daniel Kalchev wrote: > In short, don't. > > ZIL is not a cache. ZIL is there for you to recover transactions in case of a crash. It is your safety net. > If you don't have low latency SSD, put it on separate spinning disk, but by all means never on RAM and USB only as a last resort, because USB is slow for most uses and it is also extremely unreliable. > > Use the RAM for ARC, it will provide more performance. > > Daniel I second that. It may even harm you because some transactions are on the RAM zil and will have to be rolled back to get the pool online, using options like: -F Recovery mode for a non-importable pool. Attempt to return the pool to an importable state by *discarding the last few** ** transactions*. Not all damaged pools can be recovered by using this option. If successful, the data from the discarded transactions is irretrievably lost. This option is ignored if the pool is importable or already imported. If you lose power on the RAM, your ram ZIL is useless (it was only written, never read... just wasted memory). ZIL is only read for recovering from an interruption. Think of it like this... random writes come from an app, get written sequentially on the ZIL (fast on disk since it's written sequentially but fragmented since it's actually random writes) and also saved in RAM. Then as the system has time, the changes are saved on disk from RAM, so it's like a slightly slow non-volatile RAM cache. 
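To make the -F option quoted above concrete, a recovery import would look roughly like this ("tank" is only a placeholder pool name here; -n turns it into a dry run that reports what -F would discard without actually importing anything):

# zpool import -nF tank     # report which transactions would be discarded
# zpool import -F tank      # actually roll back and import
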
If the system crashes, the RAM is lost (so is your RAM ZIL), but the fragmented ZIL slog (sequential log) is safe, and used to recover what was lost from RAM. I've crashed my FreeBSD boxes (with faulty ZIL disk firmware) and the ZIL shows checksum errors when it starts, and self-heals from the mirror; it must have some data integrity just like the pool. I don't know the details. > > On Feb 15, 2013, at 10:00 AM, grarpamp wrote: > >> I have thousands of small files being written most under 8KiB, they >> either end up being removed or combined in various ways to >> produce a set of data that is stored long term. I also have >> tens of 10-50MiB files similarly, but rarely. It's not fully clear >> to me the benefits of a split ZIL. Some say a split ZIL will ward >> off some fragmentation, which pushing over 80% I'm sure to see >> otherwise. Plus a speed boost if on faster media. And maybe >> even no need to commit some ZIL to disk as small files are >> removed before ZFS decides to aggregate? >> >> Anyway, use case aside, I can put 1GiB of ram as ZIL.Same for 32GiB USB. >> RAM is obviously fast and power fail prone. >> USB is slow and power safe. >> Either could be mirrored, 2xRAM, 2xUSB. >> >> - If I lose power on RAM, will the disk still be consistent? >> - What data integrity does ZIL have? None? ZFS dataset's sha256? >> - Any production experiences with this crazy ideas? >> >> Thx. >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" -- -------------------------------------------- Peter Maloney Brockmann Consult Max-Planck-Str. 
2 21502 Geesthacht Germany Tel: +49 4152 889 300 Fax: +49 4152 889 333 E-mail: peter.maloney@brockmann-consult.de Internet: http://www.brockmann-consult.de -------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 10:08:41 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9072C5F0 for ; Fri, 15 Feb 2013 10:08:41 +0000 (UTC) (envelope-from grarpamp@gmail.com) Received: from mail-ob0-f179.google.com (mail-ob0-f179.google.com [209.85.214.179]) by mx1.freebsd.org (Postfix) with ESMTP id 63494E74 for ; Fri, 15 Feb 2013 10:08:41 +0000 (UTC) Received: by mail-ob0-f179.google.com with SMTP id un3so3371177obb.24 for ; Fri, 15 Feb 2013 02:08:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:content-type; bh=roeYBqcXmP9U9s6fy2Xa5Aj34u9VXWq7UKT7Pud/cYg=; b=NsGbMi/J3BjafbEx6xolVuICAy50VFafB10r38HV5j6elFx+o+e065BcVmHaw8pcRh RH+/U3tWhi/iGrN0FxL7H634YiXYcGmwYf4FG+nrCENCHVqfkJU0ELaWR0rs1KCWp2o9 pwIPPBhQot2of4dwDCYL7j23Btpga+IiyDGyJxNInCHOBSpDl+Jw2GU0wqlKnffuyHcE gic7eVGFfAS+MyTXww1SqpKBqryxLdZWzKeRX6RK+L7iK6geM9w8kgErhDxSCipStMHB PqK24CUub115vd7CmCQBJ/axv4wzobavfP6q4EIkyisRyt23x1FTUP3MdzqOSjawMB63 wuPg== MIME-Version: 1.0 X-Received: by 10.60.30.38 with SMTP id p6mr1400882oeh.2.1360922915348; Fri, 15 Feb 2013 02:08:35 -0800 (PST) Received: by 10.60.146.203 with HTTP; Fri, 15 Feb 2013 02:08:35 -0800 (PST) In-Reply-To: References: Date: Fri, 15 Feb 2013 05:08:35 -0500 Message-ID: Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 10:08:41 -0000 > ZIL is there for you to recover transactions in case of a crash. > It is your safety net. I always thought the ZIL was pushed out safely. So that still no matter what the disk would be consist [1]. Like when crash you just lose the ZIL's since the last ZIL push. Which odds are will be just work product here. > Use the RAM for ARC, it will provide more performance. But about reducing fragmentation without separate ZIL. I'm admittedly over full and will need to move data to new pool anyway. Just that with ZIL in main pool what article I read says problem can mostly come back without separate zil. I tend to run full till annoyed to redesign, bad habit. > If you don't have low latency SSD, put it on separate spinning disk This may be best. But ties to [1] above.. if ZIL spindle (even mirrored) dies, what is the point of what ZIL is on if ZIL separate from main pool is unsafe anyways. > USB ... unreliable I would have to test USB bus and devs for stablility. But I do have local lifetime warranty on 32GiB devices :) So maybe mirror 2 of them, or 2x2. Due to crypto I only get 7-15MiB/s on spindle anyway. > Why don't just set sync=disabled? Hmm, that might give performance as on any FS, but unsure about lesser fragmentation. I should find something to read on that. > What is split ZIL? Separate from main pool (spindles), 'zpool ... log ...' 
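Spelled out, and assuming a pool named "tank" with spare devices da2/da3 (all names here are only placeholders, not my actual layout), the two approaches discussed above would look roughly like:

# zpool add tank log mirror da2 da3    # separate ("split") ZIL, a.k.a. slog, mirrored
# zpool add tank log da2               # or a single, unmirrored log device
# zfs set sync=disabled tank/scratch   # the per-dataset alternative Xin LI suggested
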
This may become an experiment :) From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 11:01:30 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 7517E61C for ; Fri, 15 Feb 2013 11:01:30 +0000 (UTC) (envelope-from daniel@digsys.bg) Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id D6FA9189 for ; Fri, 15 Feb 2013 11:01:29 +0000 (UTC) Received: from [10.5.0.53] ([87.250.40.22]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.5/8.14.5) with ESMTP id r1FB1NM1036329 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO); Fri, 15 Feb 2013 13:01:23 +0200 (EET) (envelope-from daniel@digsys.bg) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\)) Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) From: Daniel Kalchev In-Reply-To: Date: Fri, 15 Feb 2013 12:01:23 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: References: To: grarpamp X-Mailer: Apple Mail (2.1499) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 11:01:30 -0000 Some further clarification. ZIL is only read on crash. If you shut down properly etc, the ZIL will never be read. For most of its lifetime, the ZIL is a write-only safety net. Insurance, if you wish. Because it is write-only, cheap USB FLASH devices are the poorest candidates. USB FLASH devices, even the most "high performance" ones, are mediocre at best. Because the SLOG (separate ZIL) is written sequentially, even the slowest spinning disk has higher throughput. You want a "write-optimised" device for the SLOG, which no USB FLASH device is, ever. By the way, you may have better luck with professional photography write-optimised CF cards, which are capable of recording large amounts of huge raw images in a burst -- but these are typically more expensive than a write-optimised SSD and you don't really need much capacity. But again: forget about USB if you don't enjoy problems. :) By separating the ZIL, you avoid fragmentation, yes. This is because the in-pool ZIL uses variable-length records that are then removed, leaving variable-length holes. ZFS tries to work around this, but it's never perfect. If you use separate mirrored disks for the ZIL, you will - keep writes sequential (no seeks from interfering reads); - have a reliable recovery medium for the ZIL as long as at least one of the drives survives the outage. If you want ultimate performance, the solution is battery backed RAM drives for ZIL. 
Daniel From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 12:08:52 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 153F6A70 for ; Fri, 15 Feb 2013 12:08:52 +0000 (UTC) (envelope-from shuriku@shurik.kiev.ua) Received: from graal.it-profi.org.ua (graal.shurik.kiev.ua [193.239.74.7]) by mx1.freebsd.org (Postfix) with ESMTP id A7E63776 for ; Fri, 15 Feb 2013 12:08:50 +0000 (UTC) Received: from [217.76.201.82] (helo=thinkpad.it-profi.org.ua) by graal.it-profi.org.ua with esmtpsa (TLSv1:DHE-RSA-CAMELLIA256-SHA:256) (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1U6JVI-000MyB-Uc for freebsd-fs@freebsd.org; Fri, 15 Feb 2013 13:31:08 +0200 Message-ID: <511E1C6B.50101@shurik.kiev.ua> Date: Fri, 15 Feb 2013 13:30:51 +0200 From: Alexandr Krivulya User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130210 Thunderbird/17.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-SA-Exim-Connect-IP: 217.76.201.82 X-SA-Exim-Mail-From: shuriku@shurik.kiev.ua X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on graal.it-profi.org.ua X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=ALL_TRUSTED autolearn=unavailable version=3.3.2 Subject: error destroying zfs filesystem X-SA-Exim-Version: 4.2 X-SA-Exim-Scanned: Yes (on graal.it-profi.org.ua) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 12:08:52 -0000 Hello everyone! After upgrading my zfs-only system from 8.2 to 9.1 I have many errors related to zfs in my /var/log/messages: Feb 15 13:12:44 gw kernel: metaslab_free_dva(): bad DVA 0:264842321920Solaris: WARNING: metaslab_free_dva(): bad DVA 0:338480095232 Feb 15 13:12:44 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:277633901056Solaris: WARNING: Feb 15 13:12:45 gw kernel: metaslab_free_dva(): bad DVA 0:277263710208Solaris: WARNING: metaslab_free_dva(): bad DVA 0:277633606144Solaris: WARNING: metaslab_free_dva(): bad DVA 0:278349642240Solaris: WARNING: metaslab_free_dva(): bad DVA 0:278429099008Solaris: WARNING: metaslab_free_dva(): bad DVA 0:278349926400Solaris: WARNING: metaslab_free_dva(): bad DVA 0:278245378560Solaris: WARNING: metaslab_free_dva(): bad DVA 0:256838777344Solaris: WARNING: metaslab_free_dva(): bad DVA 0:327364684800 Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad DVA 0:312373604864 root@gw:/ # zpool status -v pool: zmirror state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. see: http://illumos.org/msg/ZFS-8000-8A scan: scrub repaired 0 in 1h39m with 1 errors on Thu Feb 14 17:48:53 2013 config: NAME STATE READ WRITE CKSUM zmirror ONLINE 0 0 2 mirror-0 ONLINE 0 0 8 gpt/disk01 ONLINE 0 0 8 gpt/disk02 ONLINE 0 0 8 errors: Permanent errors have been detected in the following files: zmirror/usr:<0x0> <0xc8>:<0x0> zfs clear and zfs scrub didn't help me, so I have created a new filesystem zmirror/usr2 with mountpoint=/usr and copy all files manually from zmirror/usr to zmirror/usr2 because zfs send failed with i/o error. 
The system whith new /usr boots fine and now I try to remove broken zmirror/usr, but still no luck: root@gw:/ # zfs destroy zmirror/usr internal error: Unknown error: 122 Аварийное завершение(core dumped) How can I solve this issue? From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 12:44:54 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 08E1967C for ; Fri, 15 Feb 2013 12:44:54 +0000 (UTC) (envelope-from alexandr.kovalenko@gmail.com) Received: from mail-ie0-x231.google.com (ie-in-x0231.1e100.net [IPv6:2607:f8b0:4001:c03::231]) by mx1.freebsd.org (Postfix) with ESMTP id D5A4A95F for ; Fri, 15 Feb 2013 12:44:53 +0000 (UTC) Received: by mail-ie0-f177.google.com with SMTP id 16so4680205iea.22 for ; Fri, 15 Feb 2013 04:44:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=iPVEYjMqiptT3QgB/AMvBWj4c4RDfAqH3WFmjrRhdJw=; b=XgLubv8o49WGMUFQBEHspF5npTAEC7i7ynZTEj3QNzZxKz14m2HJtmkXiNppfCJlKR MnaHXLKTir9XS654faSNr/erBYu0MffKRM5k4bBbjN4bpsmRD/1A580w06/51j53sZOh A9tmJnxBpsYsvcOlFyYv5l2ofZk6ksBgLIuEnW7dQvoC6Q6ruJ7un/rx5eLA19lsqUUe 0hLPL4dxwm4Nua0Xa9UXkk2HXP/edEjzSdJECYkJ1r0g7wND/jX/J67xxaai86BzC+YK 0SbHi5/qn7QEaHGJFfkCUoqYWT5wOjNMjvsNHkXXWng2oMMuOewfvoxUpt9+YeM02nyK ollw== MIME-Version: 1.0 X-Received: by 10.42.28.130 with SMTP id n2mr1235284icc.6.1360932279900; Fri, 15 Feb 2013 04:44:39 -0800 (PST) Received: by 10.50.183.228 with HTTP; Fri, 15 Feb 2013 04:44:39 -0800 (PST) In-Reply-To: <511E1C6B.50101@shurik.kiev.ua> References: <511E1C6B.50101@shurik.kiev.ua> Date: Fri, 15 Feb 2013 12:44:39 +0000 Message-ID: Subject: Re: error destroying zfs filesystem From: Alexandr Kovalenko To: Alexandr Krivulya Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 12:44:54 -0000 On Fri, Feb 15, 2013 at 11:30 AM, Alexandr Krivulya wrote: > Hello everyone! > > After upgrading my zfs-only system from 8.2 to 9.1 I have many errors > related to zfs in my /var/log/messages: > > Feb 15 13:12:44 gw kernel: metaslab_free_dva(): bad DVA > 0:264842321920Solaris: WARNING: metaslab_free_dva(): bad DVA 0:338480095232 > Feb 15 13:12:44 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad > DVA 0:277633901056Solaris: WARNING: > Feb 15 13:12:45 gw kernel: metaslab_free_dva(): bad DVA > 0:277263710208Solaris: WARNING: metaslab_free_dva(): bad DVA > 0:277633606144Solaris: WARNING: metaslab_free_dva(): bad DVA > 0:278349642240Solaris: WARNING: metaslab_free_dva(): bad DVA > 0:278429099008Solaris: WARNING: metaslab_free_dva(): bad DVA > 0:278349926400Solaris: WARNING: metaslab_free_dva(): bad DVA > 0:278245378560Solaris: WARNING: metaslab_free_dva(): bad DVA > 0:256838777344Solaris: WARNING: metaslab_free_dva(): bad DVA 0:327364684800 > Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad > DVA 0:312373604864 > > root@gw:/ # zpool status -v > pool: zmirror > state: ONLINE > status: One or more devices has experienced an error resulting in data > corruption. Applications may be affected. > action: Restore the file in question if possible. Otherwise restore the > entire pool from backup. 
> see: http://illumos.org/msg/ZFS-8000-8A > scan: scrub repaired 0 in 1h39m with 1 errors on Thu Feb 14 17:48:53 2013 > config: > > NAME STATE READ WRITE CKSUM > zmirror ONLINE 0 0 2 > mirror-0 ONLINE 0 0 8 > gpt/disk01 ONLINE 0 0 8 > gpt/disk02 ONLINE 0 0 8 > > errors: Permanent errors have been detected in the following files: > > zmirror/usr:<0x0> > <0xc8>:<0x0> [dd] > How can I solve this issue? Make smartctl -t long /dev/ and then take a look if there any pending sectors/errors in output of smartctl -a /dev/ ? (for both of drives used) -- Alexandr Kovalenko http://uafug.org.ua/ From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 12:57:25 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 6820695A for ; Fri, 15 Feb 2013 12:57:25 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.187]) by mx1.freebsd.org (Postfix) with ESMTP id EAE41A19 for ; Fri, 15 Feb 2013 12:57:24 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mrbap2) with ESMTP (Nemesis) id 0M5IbP-1UpMWi2hLj-00zBfo; Fri, 15 Feb 2013 13:57:23 +0100 Message-ID: <511E30B3.2070302@brockmann-consult.de> Date: Fri, 15 Feb 2013 13:57:23 +0100 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: error destroying zfs filesystem References: <511E1C6B.50101@shurik.kiev.ua> In-Reply-To: X-Enigmail-Version: 1.5 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Provags-ID: V02:K0:dRaAKQIEe1L0VBr1BxQkA/r5PEXnmq4LdFoGHCtR4SC AoMNQTADj+/1b31kGYJfTBGEAQVpGhS4meyJyAX28Vwr7ib2nL g36dy3wr+4DsJxmVUfALBZafuSCJ8xJ+ICtL/L+MlSLQMNCKU2 Sb3XAWNbq3Iizny4swslO2zjlaqjuWPqA5c5h2V+/5vBM7+i02 0rk649GpVIx50KYLC2lztAWFJKdFjau2fpbUeJ17vRHwJjPARn GH1BoXjZ2EJgaYQbXxMTXFAWgVpVG0p8RqoCHjBk1S1XjXKe/N 4CwC3Muc46ACj0gRnebF0+lLDBswjNw8cEBpHtwNkTr942b9q7 LxDEOw8rj3hbFlyDgNyYY+rBo6GSRNrJF63YAVl2L X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 12:57:25 -0000 On 2013-02-15 13:44, Alexandr Kovalenko wrote: > On Fri, Feb 15, 2013 at 11:30 AM, Alexandr Krivulya > wrote: >> Hello everyone! 
>> >> After upgrading my zfs-only system from 8.2 to 9.1 I have many errors >> related to zfs in my /var/log/messages: >> >> Feb 15 13:12:44 gw kernel: metaslab_free_dva(): bad DVA >> 0:264842321920Solaris: WARNING: metaslab_free_dva(): bad DVA 0:338480095232 >> Feb 15 13:12:44 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad >> DVA 0:277633901056Solaris: WARNING: >> Feb 15 13:12:45 gw kernel: metaslab_free_dva(): bad DVA >> 0:277263710208Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:277633606144Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:278349642240Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:278429099008Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:278349926400Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:278245378560Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:256838777344Solaris: WARNING: metaslab_free_dva(): bad DVA 0:327364684800 >> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad >> DVA 0:312373604864 >> >> root@gw:/ # zpool status -v >> pool: zmirror >> state: ONLINE >> status: One or more devices has experienced an error resulting in data >> corruption. Applications may be affected. >> action: Restore the file in question if possible. Otherwise restore the >> entire pool from backup. >> see: http://illumos.org/msg/ZFS-8000-8A >> scan: scrub repaired 0 in 1h39m with 1 errors on Thu Feb 14 17:48:53 2013 >> config: >> >> NAME STATE READ WRITE CKSUM >> zmirror ONLINE 0 0 2 >> mirror-0 ONLINE 0 0 8 >> gpt/disk01 ONLINE 0 0 8 >> gpt/disk02 ONLINE 0 0 8 >> >> errors: Permanent errors have been detected in the following files: >> >> zmirror/usr:<0x0> >> <0xc8>:<0x0> > [dd] >> How can I solve this issue? > Make smartctl -t long /dev/ and then take a > look if there any pending sectors/errors in output of smartctl -a > /dev/ ? (for both of drives used) > You could also try going in /usr and "rm" or "truncate" some files until the "Permanent errors have been detected" list is empty. And this assumes you already ran a full scrub, which you must do to remove the files. -- -------------------------------------------- Peter Maloney Brockmann Consult Max-Planck-Str. 
2 21502 Geesthacht Germany Tel: +49 4152 889 300 Fax: +49 4152 889 333 E-mail: peter.maloney@brockmann-consult.de Internet: http://www.brockmann-consult.de -------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 15:15:01 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 003BC655 for ; Fri, 15 Feb 2013 15:15:00 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id BF37123C for ; Fri, 15 Feb 2013 15:14:59 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id r1FF8kxc008152; Fri, 15 Feb 2013 09:08:50 -0600 (CST) Date: Fri, 15 Feb 2013 09:08:46 -0600 (CST) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Daniel Kalchev Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) In-Reply-To: Message-ID: References: User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Fri, 15 Feb 2013 09:08:51 -0600 (CST) Cc: freebsd-fs@freebsd.org, grarpamp X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 15:15:01 -0000 On Fri, 15 Feb 2013, Daniel Kalchev wrote: > Some further clarification. > > ZIL is only read on crash. If you shut down properly etc, the ZIL > will be never read. For most of it's lifetime, the ZIL is write-only > safety net. An insurance, if you wish. Something I did not see mentioned in this discussion thread is that the ZIL is only used for synchronous writes. Database writes and NFS writes are usually synchronous writes. Systems which are not used for these purposes might not produce any synchronous writes and so the ZIL is not used at all. As long as the pool disks obey cache sync requests, the integrity of the pool is assured. The integrity of the pool is not dependent on data in the ZIL. 
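A rough way to check whether a workload issues synchronous writes at all is to watch the log device (if the pool has one) while the load runs, and to look at the per-dataset sync policy; "tank" is just a placeholder pool name:

# zpool iostat -v tank 1    # per-vdev I/O; an idle log vdev suggests few or no sync writes
# zfs get -r sync tank      # sync policy per dataset: standard, always or disabled
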
Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 16:14:24 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 9E336EB7 for ; Fri, 15 Feb 2013 16:14:24 +0000 (UTC) (envelope-from freebsd-listen@fabiankeil.de) Received: from smtprelay03.ispgateway.de (smtprelay03.ispgateway.de [80.67.29.7]) by mx1.freebsd.org (Postfix) with ESMTP id 354D670F for ; Fri, 15 Feb 2013 16:14:24 +0000 (UTC) Received: from [78.35.132.100] (helo=fabiankeil.de) by smtprelay03.ispgateway.de with esmtpsa (SSLv3:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1U6NvU-0005MD-F4; Fri, 15 Feb 2013 17:14:16 +0100 Date: Fri, 15 Feb 2013 17:11:44 +0100 From: Fabian Keil To: grarpamp Subject: Re: Crazy ZFS ZIL options: md(4) umass(4) Message-ID: <20130215171144.710bf9af@fabiankeil.de> In-Reply-To: References: Mime-Version: 1.0 Content-Type: multipart/signed; micalg=PGP-SHA1; boundary="Sig_/6VHHzP9S0I0LKlYpaDRux_r"; protocol="application/pgp-signature" X-Df-Sender: Nzc1MDY3 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 16:14:24 -0000 --Sig_/6VHHzP9S0I0LKlYpaDRux_r Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable grarpamp wrote: > > ZIL is there for you to recover transactions in case of a crash. > > It is your safety net. > > I always thought the ZIL was pushed out safely. So that still > no matter what the disk would be consist [1]. Like when crash > you just lose the ZIL's since the last ZIL push. Which odds > are will be just work product here. I'd expect the pool to remain consistent as long as one of the uberblocks "works", of course you'd still lose the transactions that haven't made it to the disk yet. I agree that using RAM that isn't battery-backed for the ZIL doesn't make much sense, though, and that disabling sync is more reasonable if you can live with losing transactions. > > Use the RAM for ARC, it will provide more performance. > > But about reducing fragmentation without separate ZIL. > I'm admittedly over full and will need to move data to > new pool anyway. Just that with ZIL in main pool what article > I read says problem can mostly come back without separate zil. > I tend to run full till annoyed to redesign, bad habit. Disabling sync potentially reduces the fragmentation and you could additionally increase vfs.zfs.txg.synctime_ms and vfs.zfs.txg.timeout which (again potentially) reduce fragmentation further but can negatively impact "interactivity". I'm not aware of a quick way to measure fragmentation on ZFS pools, though, so I'd be interested to know how you intend to confirm that your "fragmentation tuning" actually improves things. > > USB ... unreliable > > I would have to test USB bus and devs for stablility. But > I do have local lifetime warranty on 32GiB devices :) > So maybe mirror 2 of them, or 2x2. > Due to crypto I only get 7-15MiB/s on spindle anyway. I'm sure a slow ZIL could throttle this even further. 
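For reference, those knobs are usually set as tunables, e.g. in /boot/loader.conf; the values below are only examples, and depending on the FreeBSD version they may or may not also be adjustable at run time via sysctl:

vfs.zfs.txg.timeout=10          # seconds between txg commits (default is 5)
vfs.zfs.txg.synctime_ms=2000    # target time spent syncing a txg, in milliseconds

# sysctl vfs.zfs.txg            # inspect the current values
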
Fabian --Sig_/6VHHzP9S0I0LKlYpaDRux_r Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlEeXkYACgkQBYqIVf93VJ2gZgCfSm0J4VqzjXB2lSp+HmDEkHr9 EWgAnAkgOc98uzyGcwkofROkugw8YKCg =YV93 -----END PGP SIGNATURE----- --Sig_/6VHHzP9S0I0LKlYpaDRux_r-- From owner-freebsd-fs@FreeBSD.ORG Fri Feb 15 21:57:17 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 6D321DA4; Fri, 15 Feb 2013 21:57:17 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 318E0888; Fri, 15 Feb 2013 21:57:17 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r1FLvGo6000951; Fri, 15 Feb 2013 21:57:16 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r1FLvGQS000947; Fri, 15 Feb 2013 21:57:16 GMT (envelope-from linimon) Date: Fri, 15 Feb 2013 21:57:16 GMT Message-Id: <201302152157.r1FLvGQS000947@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/176179: [nfs] nfs client KASSERT: panic: attempt to set TDF_SBDRY recursively X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 15 Feb 2013 21:57:17 -0000 Old Synopsis: nfs client KASSERT: panic: attempt to set TDF_SBDRY recursively New Synopsis: [nfs] nfs client KASSERT: panic: attempt to set TDF_SBDRY recursively Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Feb 15 21:57:04 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=176179 From owner-freebsd-fs@FreeBSD.ORG Sat Feb 16 09:55:13 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 09995667 for ; Sat, 16 Feb 2013 09:55:13 +0000 (UTC) (envelope-from shuriku@shurik.kiev.ua) Received: from graal.it-profi.org.ua (graal.shurik.kiev.ua [193.239.74.7]) by mx1.freebsd.org (Postfix) with ESMTP id 8FB42272 for ; Sat, 16 Feb 2013 09:55:12 +0000 (UTC) Received: from [93.183.237.30] (helo=thinkpad.it-profi.org.ua) by graal.it-profi.org.ua with esmtpsa (TLSv1:DHE-RSA-CAMELLIA256-SHA:256) (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1U6eU5-000Hzg-4m for freebsd-fs@freebsd.org; Sat, 16 Feb 2013 11:55:10 +0200 Message-ID: <511F5779.5030805@shurik.kiev.ua> Date: Sat, 16 Feb 2013 11:55:05 +0200 From: Alexandr Krivulya User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130210 Thunderbird/17.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <511E1C6B.50101@shurik.kiev.ua> <511E30B3.2070302@brockmann-consult.de> In-Reply-To: <511E30B3.2070302@brockmann-consult.de> Content-Type: multipart/mixed; boundary="------------050809020106070702000508" X-SA-Exim-Connect-IP: 93.183.237.30 X-SA-Exim-Mail-From: shuriku@shurik.kiev.ua X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on graal.it-profi.org.ua X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=ALL_TRUSTED autolearn=unavailable version=3.3.2 Subject: Re: error destroying zfs filesystem X-SA-Exim-Version: 4.2 X-SA-Exim-Scanned: Yes (on graal.it-profi.org.ua) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 16 Feb 2013 09:55:13 -0000 This is a multi-part message in MIME format. --------------050809020106070702000508 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 15.02.2013 14:57, Peter Maloney пишет: > On 2013-02-15 13:44, Alexandr Kovalenko wrote: >> On Fri, Feb 15, 2013 at 11:30 AM, Alexandr Krivulya >> wrote: >>> Hello everyone! >>> >>> After upgrading my zfs-only system from 8.2 to 9.1 I have many errors >>> related to zfs in my /var/log/messages: >>> >>> Feb 15 13:12:44 gw kernel: metaslab_free_dva(): bad DVA >>> 0:264842321920Solaris: WARNING: metaslab_free_dva(): bad DVA 0:338480095232 >>> Feb 15 13:12:44 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad >>> DVA 0:277633901056Solaris: WARNING: >>> Feb 15 13:12:45 gw kernel: metaslab_free_dva(): bad DVA >>> 0:277263710208Solaris: WARNING: metaslab_free_dva(): bad DVA >>> 0:277633606144Solaris: WARNING: metaslab_free_dva(): bad DVA >>> 0:278349642240Solaris: WARNING: metaslab_free_dva(): bad DVA >>> 0:278429099008Solaris: WARNING: metaslab_free_dva(): bad DVA >>> 0:278349926400Solaris: WARNING: metaslab_free_dva(): bad DVA >>> 0:278245378560Solaris: WARNING: metaslab_free_dva(): bad DVA >>> 0:256838777344Solaris: WARNING: metaslab_free_dva(): bad DVA 0:327364684800 >>> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad >>> DVA 0:312373604864 >>> >>> root@gw:/ # zpool status -v >>> pool: zmirror >>> state: ONLINE >>> status: One or more devices has experienced an error resulting in data >>> corruption. Applications may be affected. >>> action: Restore the file in question if possible. Otherwise restore the >>> entire pool from backup. 
>>> see: http://illumos.org/msg/ZFS-8000-8A >>> scan: scrub repaired 0 in 1h39m with 1 errors on Thu Feb 14 17:48:53 2013 >>> config: >>> >>> NAME STATE READ WRITE CKSUM >>> zmirror ONLINE 0 0 2 >>> mirror-0 ONLINE 0 0 8 >>> gpt/disk01 ONLINE 0 0 8 >>> gpt/disk02 ONLINE 0 0 8 >>> >>> errors: Permanent errors have been detected in the following files: >>> >>> zmirror/usr:<0x0> >>> <0xc8>:<0x0> >> [dd] >>> How can I solve this issue? >> Make smartctl -t long /dev/ and then take a >> look if there any pending sectors/errors in output of smartctl -a >> /dev/ ? (for both of drives used) All tests seem to be fine: root@gw:/usr/home/support # smartctl -l selftest /dev/ada0 smartctl 6.0 2012-10-10 r3643 [FreeBSD 9.1-RELEASE amd64] (local build) Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org === START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 1446 - root@gw:/usr/home/support # smartctl -l selftest /dev/ada1 smartctl 6.0 2012-10-10 r3643 [FreeBSD 9.1-RELEASE amd64] (local build) Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org === START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 1630 smartctl also didn't show any problems, see attached file > You could also try going in /usr and "rm" or "truncate" some files > until the "Permanent errors have been detected" list is empty. And > this assumes you already ran a full scrub, which you must do to remove > the files. Now I cannot mount this filesystem to remove files: root@gw:/usr/home/support # zfs mount zmirror/usr cannot mount 'zmirror/usr': mountpoint or dataset is busy The only way I see is to back up the entire pool, destroy and recreate it, and restore from a backup. 
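For what it's worth, the usual round trip for that is a recursive snapshot plus send/receive into a scratch pool; "backup" below is only a placeholder pool name, and zmirror/usr may of course still fail to send given the existing errors:

# zfs snapshot -r zmirror@migrate
# zfs send -R zmirror@migrate | zfs receive -duF backup
# (destroy and recreate zmirror, then send the data back the same way)
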
--------------050809020106070702000508 Content-Type: text/plain; charset=UTF-8; name="smartctl.txt" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="smartctl.txt" [base64-encoded attachment omitted: full "smartctl -iAH" output for /dev/ada0 and /dev/ada1, which the poster reports shows no problems] 
--------------050809020106070702000508-- From owner-freebsd-fs@FreeBSD.ORG Sat Feb 16 10:42:01 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id A86F7212 for ; Sat, 16 Feb 2013 10:42:01 +0000 (UTC) (envelope-from shuriku@shurik.kiev.ua) Received: from graal.it-profi.org.ua (graal.shurik.kiev.ua [193.239.74.7]) by mx1.freebsd.org (Postfix) with ESMTP id 0E18337F for ; Sat, 16 Feb 2013 10:42:00 +0000 (UTC) Received: from [93.183.237.30] (helo=thinkpad.it-profi.org.ua) by graal.it-profi.org.ua with esmtpsa (TLSv1:DHE-RSA-CAMELLIA256-SHA:256) (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1U6fDN-000Jf9-6r; Sat, 16 Feb 2013 12:41:59 +0200 Message-ID: <511F6270.7040008@shurik.kiev.ua> Date: Sat, 16 Feb 2013 12:41:52 +0200 From: Alexandr Krivulya User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130210 Thunderbird/17.0.2 MIME-Version: 1.0 To: Adam Nowacki References: <511E1C6B.50101@shurik.kiev.ua> <511F5FD8.4070209@platinum.linux.pl> In-Reply-To: <511F5FD8.4070209@platinum.linux.pl> Content-Type: multipart/mixed; boundary="------------030004050903030507050606" X-SA-Exim-Connect-IP: 93.183.237.30 X-SA-Exim-Mail-From: shuriku@shurik.kiev.ua X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on graal.it-profi.org.ua X-Spam-Level: X-Spam-Status: No, score=-1.0 required=5.0 tests=ALL_TRUSTED autolearn=unavailable version=3.3.2 Subject: Re: error destroying zfs filesystem X-SA-Exim-Version: 4.2 X-SA-Exim-Scanned: Yes (on graal.it-profi.org.ua) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 16 Feb 2013 10:42:01 -0000 This is a multi-part message in MIME format. --------------030004050903030507050606 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 16.02.2013 12:30, Adam Nowacki wrote: > Paste output of 'zdb -uuumdC zmirror' Attached > > On 2013-02-15 12:30, Alexandr Krivulya wrote: >> Hello everyone! 
>> >> After upgrading my zfs-only system from 8.2 to 9.1 I have many errors >> related to zfs in my /var/log/messages: >> >> Feb 15 13:12:44 gw kernel: metaslab_free_dva(): bad DVA >> 0:264842321920Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:338480095232 >> Feb 15 13:12:44 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad >> DVA 0:277633901056Solaris: WARNING: >> Feb 15 13:12:45 gw kernel: metaslab_free_dva(): bad DVA >> 0:277263710208Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:277633606144Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:278349642240Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:278429099008Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:278349926400Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:278245378560Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:256838777344Solaris: WARNING: metaslab_free_dva(): bad DVA >> 0:327364684800 >> Feb 15 13:12:45 gw kernel: Solaris: WARNING: metaslab_free_dva(): bad >> DVA 0:312373604864 >> >> root@gw:/ # zpool status -v >> pool: zmirror >> state: ONLINE >> status: One or more devices has experienced an error resulting in data >> corruption. Applications may be affected. >> action: Restore the file in question if possible. Otherwise restore the >> entire pool from backup. >> see: http://illumos.org/msg/ZFS-8000-8A >> scan: scrub repaired 0 in 1h39m with 1 errors on Thu Feb 14 >> 17:48:53 2013 >> config: >> >> NAME STATE READ WRITE CKSUM >> zmirror ONLINE 0 0 2 >> mirror-0 ONLINE 0 0 8 >> gpt/disk01 ONLINE 0 0 8 >> gpt/disk02 ONLINE 0 0 8 >> >> errors: Permanent errors have been detected in the following files: >> >> zmirror/usr:<0x0> >> <0xc8>:<0x0> >> >> zfs clear and zfs scrub didn't help me, so I have created a new >> filesystem zmirror/usr2 with mountpoint=/usr and copy all files manually >> from zmirror/usr to zmirror/usr2 because zfs send failed with i/o error. >> The system whith new /usr boots fine and now I try to remove broken >> zmirror/usr, but still no luck: >> >> root@gw:/ # zfs destroy zmirror/usr >> internal error: Unknown error: 122 >> Аварийное завершение(core dumped) >> >> How can I solve this issue? 
>> >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > --------------030004050903030507050606 Content-Type: text/plain; charset=UTF-8; name="zdb_output.txt" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="zdb_output.txt" [base64-encoded attachment omitted: the requested 'zdb -uuumdC zmirror' output, showing the MOS configuration, uberblock and per-metaslab space usage] 
MTA1ICAgZnJlZSAgICAgMTUyTQogICAgICAgIG1ldGFzbGFiICAgICAzMCAgIG9mZnNldCAg ICBmMDAwMDAwMDAgICBzcGFjZW1hcCAgICAxMDcgICBmcmVlICAgICAxNDRNCiAgICAgICAg bWV0YXNsYWIgICAgIDMxICAgb2Zmc2V0ICAgIGY4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDEw OCAgIGZyZWUgICAgMS4xMEcKICAgICAgICBtZXRhc2xhYiAgICAgMzIgICBvZmZzZXQgICAx MDAwMDAwMDAwICAgc3BhY2VtYXAgICAgIDgxICAgZnJlZSAgICAxLjEzRwogICAgICAgIG1l dGFzbGFiICAgICAzMyAgIG9mZnNldCAgIDEwODAwMDAwMDAgICBzcGFjZW1hcCAgICAxMDkg ICBmcmVlICAgIDEuMTVHCiAgICAgICAgbWV0YXNsYWIgICAgIDM0ICAgb2Zmc2V0ICAgMTEw MDAwMDAwMCAgIHNwYWNlbWFwICAgIDExMCAgIGZyZWUgICAgIDg0Mk0KICAgICAgICBtZXRh c2xhYiAgICAgMzUgICBvZmZzZXQgICAxMTgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTExICAg ZnJlZSAgICAgNjQ5TQogICAgICAgIG1ldGFzbGFiICAgICAzNiAgIG9mZnNldCAgIDEyMDAw MDAwMDAgICBzcGFjZW1hcCAgICAxMTIgICBmcmVlICAgICA0ODVNCiAgICAgICAgbWV0YXNs YWIgICAgIDM3ICAgb2Zmc2V0ICAgMTI4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDExMyAgIGZy ZWUgICAgMS4xN0cKICAgICAgICBtZXRhc2xhYiAgICAgMzggICBvZmZzZXQgICAxMzAwMDAw MDAwICAgc3BhY2VtYXAgICAgMTE0ICAgZnJlZSAgICAgOTk1TQogICAgICAgIG1ldGFzbGFi ICAgICAzOSAgIG9mZnNldCAgIDEzODAwMDAwMDAgICBzcGFjZW1hcCAgICAxMTUgICBmcmVl ICAgICA4MzBNCiAgICAgICAgbWV0YXNsYWIgICAgIDQwICAgb2Zmc2V0ICAgMTQwMDAwMDAw MCAgIHNwYWNlbWFwICAgICA5MCAgIGZyZWUgICAgIDk1NE0KICAgICAgICBtZXRhc2xhYiAg ICAgNDEgICBvZmZzZXQgICAxNDgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTE3ICAgZnJlZSAg ICAxLjE1RwogICAgICAgIG1ldGFzbGFiICAgICA0MiAgIG9mZnNldCAgIDE1MDAwMDAwMDAg ICBzcGFjZW1hcCAgICAxMTggICBmcmVlICAgICAzMDFNCiAgICAgICAgbWV0YXNsYWIgICAg IDQzICAgb2Zmc2V0ICAgMTU4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDExOSAgIGZyZWUgICAg MS4wM0cKICAgICAgICBtZXRhc2xhYiAgICAgNDQgICBvZmZzZXQgICAxNjAwMDAwMDAwICAg c3BhY2VtYXAgICAgIDI0ICAgZnJlZSAgICAxLjA4RwogICAgICAgIG1ldGFzbGFiICAgICA0 NSAgIG9mZnNldCAgIDE2ODAwMDAwMDAgICBzcGFjZW1hcCAgICAxMjAgICBmcmVlICAgICA3 ODNNCiAgICAgICAgbWV0YXNsYWIgICAgIDQ2ICAgb2Zmc2V0ICAgMTcwMDAwMDAwMCAgIHNw YWNlbWFwICAgIDEyMSAgIGZyZWUgICAgIDM0ME0KICAgICAgICBtZXRhc2xhYiAgICAgNDcg ICBvZmZzZXQgICAxNzgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTIyICAgZnJlZSAgICAgNjQ1 TQogICAgICAgIG1ldGFzbGFiICAgICA0OCAgIG9mZnNldCAgIDE4MDAwMDAwMDAgICBzcGFj ZW1hcCAgICAxMjMgICBmcmVlICAgIDEuMTlHCiAgICAgICAgbWV0YXNsYWIgICAgIDQ5ICAg b2Zmc2V0ICAgMTg4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDEyNCAgIGZyZWUgICAgIDg4OU0K ICAgICAgICBtZXRhc2xhYiAgICAgNTAgICBvZmZzZXQgICAxOTAwMDAwMDAwICAgc3BhY2Vt YXAgICAgMTI1ICAgZnJlZSAgICAxLjIyRwogICAgICAgIG1ldGFzbGFiICAgICA1MSAgIG9m ZnNldCAgIDE5ODAwMDAwMDAgICBzcGFjZW1hcCAgICAxMjYgICBmcmVlICAgIDEuMjJHCiAg ICAgICAgbWV0YXNsYWIgICAgIDUyICAgb2Zmc2V0ICAgMWEwMDAwMDAwMCAgIHNwYWNlbWFw ICAgIDEwNiAgIGZyZWUgICAgMS4yMUcKICAgICAgICBtZXRhc2xhYiAgICAgNTMgICBvZmZz ZXQgICAxYTgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTI3ICAgZnJlZSAgICAxLjIwRwogICAg ICAgIG1ldGFzbGFiICAgICA1NCAgIG9mZnNldCAgIDFiMDAwMDAwMDAgICBzcGFjZW1hcCAg ICAgODAgICBmcmVlICAgIDEuMjdHCiAgICAgICAgbWV0YXNsYWIgICAgIDU1ICAgb2Zmc2V0 ICAgMWI4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDEyOCAgIGZyZWUgICAgMS4yOEcKICAgICAg ICBtZXRhc2xhYiAgICAgNTYgICBvZmZzZXQgICAxYzAwMDAwMDAwICAgc3BhY2VtYXAgICAg MTI5ICAgZnJlZSAgICAxLjI2RwogICAgICAgIG1ldGFzbGFiICAgICA1NyAgIG9mZnNldCAg IDFjODAwMDAwMDAgICBzcGFjZW1hcCAgICAxMzAgICBmcmVlICAgIDEuMjVHCiAgICAgICAg bWV0YXNsYWIgICAgIDU4ICAgb2Zmc2V0ICAgMWQwMDAwMDAwMCAgIHNwYWNlbWFwICAgIDEz MSAgIGZyZWUgICAgIDE0ME0KICAgICAgICBtZXRhc2xhYiAgICAgNTkgICBvZmZzZXQgICAx ZDgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTMyICAgZnJlZSAgICAxLjE3RwogICAgICAgIG1l dGFzbGFiICAgICA2MCAgIG9mZnNldCAgIDFlMDAwMDAwMDAgICBzcGFjZW1hcCAgICAxMzMg ICBmcmVlICAgIDEuMzFHCiAgICAgICAgbWV0YXNsYWIgICAgIDYxICAgb2Zmc2V0ICAgMWU4 MDAwMDAwMCAgIHNwYWNlbWFwICAgIDEzNCAgIGZyZWUgICAgIDYyNU0KICAgICAgICBtZXRh 
c2xhYiAgICAgNjIgICBvZmZzZXQgICAxZjAwMDAwMDAwICAgc3BhY2VtYXAgICAgMTE2ICAg ZnJlZSAgICAxLjI4RwogICAgICAgIG1ldGFzbGFiICAgICA2MyAgIG9mZnNldCAgIDFmODAw MDAwMDAgICBzcGFjZW1hcCAgICAxMzUgICBmcmVlICAgICA4ODlNCiAgICAgICAgbWV0YXNs YWIgICAgIDY0ICAgb2Zmc2V0ICAgMjAwMDAwMDAwMCAgIHNwYWNlbWFwICAgIDEzNiAgIGZy ZWUgICAgMS4yOEcKICAgICAgICBtZXRhc2xhYiAgICAgNjUgICBvZmZzZXQgICAyMDgwMDAw MDAwICAgc3BhY2VtYXAgICAgMTM3ICAgZnJlZSAgICAgNjg3TQogICAgICAgIG1ldGFzbGFi ICAgICA2NiAgIG9mZnNldCAgIDIxMDAwMDAwMDAgICBzcGFjZW1hcCAgICAxMzggICBmcmVl ICAgICA4OTZNCiAgICAgICAgbWV0YXNsYWIgICAgIDY3ICAgb2Zmc2V0ICAgMjE4MDAwMDAw MCAgIHNwYWNlbWFwICAgIDEzOSAgIGZyZWUgICAgMS4wMEcKICAgICAgICBtZXRhc2xhYiAg ICAgNjggICBvZmZzZXQgICAyMjAwMDAwMDAwICAgc3BhY2VtYXAgICAgMTQwICAgZnJlZSAg ICAxLjMyRwogICAgICAgIG1ldGFzbGFiICAgICA2OSAgIG9mZnNldCAgIDIyODAwMDAwMDAg ICBzcGFjZW1hcCAgICAxNDEgICBmcmVlICAgIDEuMjdHCiAgICAgICAgbWV0YXNsYWIgICAg IDcwICAgb2Zmc2V0ICAgMjMwMDAwMDAwMCAgIHNwYWNlbWFwICAgIDE0MiAgIGZyZWUgICAg MS4yNEcKICAgICAgICBtZXRhc2xhYiAgICAgNzEgICBvZmZzZXQgICAyMzgwMDAwMDAwICAg c3BhY2VtYXAgICAgMTQzICAgZnJlZSAgICAxLjMwRwogICAgICAgIG1ldGFzbGFiICAgICA3 MiAgIG9mZnNldCAgIDI0MDAwMDAwMDAgICBzcGFjZW1hcCAgICAxNDQgICBmcmVlICAgIDEu MzNHCiAgICAgICAgbWV0YXNsYWIgICAgIDczICAgb2Zmc2V0ICAgMjQ4MDAwMDAwMCAgIHNw YWNlbWFwICAgIDE0NSAgIGZyZWUgICAgMS4yNkcKICAgICAgICBtZXRhc2xhYiAgICAgNzQg ICBvZmZzZXQgICAyNTAwMDAwMDAwICAgc3BhY2VtYXAgICAgMTQ2ICAgZnJlZSAgICAxLjM1 RwogICAgICAgIG1ldGFzbGFiICAgICA3NSAgIG9mZnNldCAgIDI1ODAwMDAwMDAgICBzcGFj ZW1hcCAgICAxNDcgICBmcmVlICAgIDEuMTRHCiAgICAgICAgbWV0YXNsYWIgICAgIDc2ICAg b2Zmc2V0ICAgMjYwMDAwMDAwMCAgIHNwYWNlbWFwICAgIDE0OCAgIGZyZWUgICAgMS4yOUcK ICAgICAgICBtZXRhc2xhYiAgICAgNzcgICBvZmZzZXQgICAyNjgwMDAwMDAwICAgc3BhY2Vt YXAgICAgMTQ5ICAgZnJlZSAgICAxLjM4RwogICAgICAgIG1ldGFzbGFiICAgICA3OCAgIG9m ZnNldCAgIDI3MDAwMDAwMDAgICBzcGFjZW1hcCAgICAxNTAgICBmcmVlICAgIDEuMjhHCiAg ICAgICAgbWV0YXNsYWIgICAgIDc5ICAgb2Zmc2V0ICAgMjc4MDAwMDAwMCAgIHNwYWNlbWFw ICAgIDE1MSAgIGZyZWUgICAgMS4zM0cKICAgICAgICBtZXRhc2xhYiAgICAgODAgICBvZmZz ZXQgICAyODAwMDAwMDAwICAgc3BhY2VtYXAgICAgMTUyICAgZnJlZSAgICAxLjMwRwogICAg ICAgIG1ldGFzbGFiICAgICA4MSAgIG9mZnNldCAgIDI4ODAwMDAwMDAgICBzcGFjZW1hcCAg ICAxNTMgICBmcmVlICAgIDEuMzFHCiAgICAgICAgbWV0YXNsYWIgICAgIDgyICAgb2Zmc2V0 ICAgMjkwMDAwMDAwMCAgIHNwYWNlbWFwICAgIDE1NCAgIGZyZWUgICAgMS40MUcKICAgICAg ICBtZXRhc2xhYiAgICAgODMgICBvZmZzZXQgICAyOTgwMDAwMDAwICAgc3BhY2VtYXAgICAg MTU1ICAgZnJlZSAgICAxLjI5RwogICAgICAgIG1ldGFzbGFiICAgICA4NCAgIG9mZnNldCAg IDJhMDAwMDAwMDAgICBzcGFjZW1hcCAgICAxNTYgICBmcmVlICAgIDEuMjlHCiAgICAgICAg bWV0YXNsYWIgICAgIDg1ICAgb2Zmc2V0ICAgMmE4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDE1 NyAgIGZyZWUgICAgMS4zNEcKICAgICAgICBtZXRhc2xhYiAgICAgODYgICBvZmZzZXQgICAy YjAwMDAwMDAwICAgc3BhY2VtYXAgICAgMTU4ICAgZnJlZSAgICAxLjMzRwogICAgICAgIG1l dGFzbGFiICAgICA4NyAgIG9mZnNldCAgIDJiODAwMDAwMDAgICBzcGFjZW1hcCAgICAxNTkg ICBmcmVlICAgIDEuMzZHCiAgICAgICAgbWV0YXNsYWIgICAgIDg4ICAgb2Zmc2V0ICAgMmMw MDAwMDAwMCAgIHNwYWNlbWFwICAgIDE2MCAgIGZyZWUgICAgMS40MUcKICAgICAgICBtZXRh c2xhYiAgICAgODkgICBvZmZzZXQgICAyYzgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTYxICAg ZnJlZSAgICAxLjM0RwogICAgICAgIG1ldGFzbGFiICAgICA5MCAgIG9mZnNldCAgIDJkMDAw MDAwMDAgICBzcGFjZW1hcCAgICAxNjIgICBmcmVlICAgIDEuMzlHCiAgICAgICAgbWV0YXNs YWIgICAgIDkxICAgb2Zmc2V0ICAgMmQ4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDE2MyAgIGZy ZWUgICAgMS4zNUcKICAgICAgICBtZXRhc2xhYiAgICAgOTIgICBvZmZzZXQgICAyZTAwMDAw MDAwICAgc3BhY2VtYXAgICAgMTY0ICAgZnJlZSAgICAxLjM1RwogICAgICAgIG1ldGFzbGFi ICAgICA5MyAgIG9mZnNldCAgIDJlODAwMDAwMDAgICBzcGFjZW1hcCAgICAxNjUgICBmcmVl ICAgIDEuMzBHCiAgICAgICAgbWV0YXNsYWIgICAgIDk0ICAgb2Zmc2V0ICAgMmYwMDAwMDAw 
MCAgIHNwYWNlbWFwICAgIDE2NiAgIGZyZWUgICAgMS4zOUcKICAgICAgICBtZXRhc2xhYiAg ICAgOTUgICBvZmZzZXQgICAyZjgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTY3ICAgZnJlZSAg ICAxLjM2RwogICAgICAgIG1ldGFzbGFiICAgICA5NiAgIG9mZnNldCAgIDMwMDAwMDAwMDAg ICBzcGFjZW1hcCAgICAxNjggICBmcmVlICAgIDEuNDBHCiAgICAgICAgbWV0YXNsYWIgICAg IDk3ICAgb2Zmc2V0ICAgMzA4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDE2OSAgIGZyZWUgICAg MS4zOUcKICAgICAgICBtZXRhc2xhYiAgICAgOTggICBvZmZzZXQgICAzMTAwMDAwMDAwICAg c3BhY2VtYXAgICAgMTcwICAgZnJlZSAgICAxLjQxRwogICAgICAgIG1ldGFzbGFiICAgICA5 OSAgIG9mZnNldCAgIDMxODAwMDAwMDAgICBzcGFjZW1hcCAgICAxNzEgICBmcmVlICAgIDEu MzlHCiAgICAgICAgbWV0YXNsYWIgICAgMTAwICAgb2Zmc2V0ICAgMzIwMDAwMDAwMCAgIHNw YWNlbWFwICAgIDE3MiAgIGZyZWUgICAgMS4zNkcKICAgICAgICBtZXRhc2xhYiAgICAxMDEg ICBvZmZzZXQgICAzMjgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTczICAgZnJlZSAgICAxLjQ0 RwogICAgICAgIG1ldGFzbGFiICAgIDEwMiAgIG9mZnNldCAgIDMzMDAwMDAwMDAgICBzcGFj ZW1hcCAgICAxNzQgICBmcmVlICAgIDEuMzRHCiAgICAgICAgbWV0YXNsYWIgICAgMTAzICAg b2Zmc2V0ICAgMzM4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDE3NSAgIGZyZWUgICAgMS40MUcK ICAgICAgICBtZXRhc2xhYiAgICAxMDQgICBvZmZzZXQgICAzNDAwMDAwMDAwICAgc3BhY2Vt YXAgICAgMTc2ICAgZnJlZSAgICAxLjQ0RwogICAgICAgIG1ldGFzbGFiICAgIDEwNSAgIG9m ZnNldCAgIDM0ODAwMDAwMDAgICBzcGFjZW1hcCAgICAxNzcgICBmcmVlICAgIDEuNDZHCiAg ICAgICAgbWV0YXNsYWIgICAgMTA2ICAgb2Zmc2V0ICAgMzUwMDAwMDAwMCAgIHNwYWNlbWFw ICAgIDE3OCAgIGZyZWUgICAgMS4zN0cKICAgICAgICBtZXRhc2xhYiAgICAxMDcgICBvZmZz ZXQgICAzNTgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTc5ICAgZnJlZSAgICAxLjQxRwogICAg ICAgIG1ldGFzbGFiICAgIDEwOCAgIG9mZnNldCAgIDM2MDAwMDAwMDAgICBzcGFjZW1hcCAg ICAxODAgICBmcmVlICAgIDEuMzhHCiAgICAgICAgbWV0YXNsYWIgICAgMTA5ICAgb2Zmc2V0 ICAgMzY4MDAwMDAwMCAgIHNwYWNlbWFwICAgIDE4MSAgIGZyZWUgICAgMS40OEcKICAgICAg ICBtZXRhc2xhYiAgICAxMTAgICBvZmZzZXQgICAzNzAwMDAwMDAwICAgc3BhY2VtYXAgICAg MTgyICAgZnJlZSAgICAxLjQzRwogICAgICAgIG1ldGFzbGFiICAgIDExMSAgIG9mZnNldCAg IDM3ODAwMDAwMDAgICBzcGFjZW1hcCAgICAxODMgICBmcmVlICAgIDEuNDNHCiAgICAgICAg bWV0YXNsYWIgICAgMTEyICAgb2Zmc2V0ICAgMzgwMDAwMDAwMCAgIHNwYWNlbWFwICAgIDE4 NCAgIGZyZWUgICAgMS40N0cKICAgICAgICBtZXRhc2xhYiAgICAxMTMgICBvZmZzZXQgICAz ODgwMDAwMDAwICAgc3BhY2VtYXAgICAgMTg1ICAgZnJlZSAgICAxLjQ3RwogICAgICAgIG1l dGFzbGFiICAgIDExNCAgIG9mZnNldCAgIDM5MDAwMDAwMDAgICBzcGFjZW1hcCAgICAxODYg ICBmcmVlICAgIDEuNDJHCiAgICAgICAgbWV0YXNsYWIgICAgMTE1ICAgb2Zmc2V0ICAgMzk4 MDAwMDAwMCAgIHNwYWNlbWFwICAgIDE4NyAgIGZyZWUgICAgMS40OUcKCkRhdGFzZXQgbW9z IFtNRVRBXSwgSUQgMCwgY3JfdHhnIDQsIDU0LjRNLCAzMTggb2JqZWN0cwpEYXRhc2V0IHpt aXJyb3IvcGdzcWxkYXRhIFtaUExdLCBJRCA2MCwgY3JfdHhnIDMxMTAsIDExM00sIDE1ODAg b2JqZWN0cwpDb3VsZCBub3Qgb3BlbiB6bWlycm9yL3VzciwgZXJyb3IgMTYKRGF0YXNldCB6 bWlycm9yL2Z0cCBbWlBMXSwgSUQgOTgsIGNyX3R4ZyAxMzYyNCwgMzQuOEcsIDE3MzUgb2Jq ZWN0cwpEYXRhc2V0IHptaXJyb3IvdXNyMiBbWlBMXSwgSUQgMTk2LCBjcl90eGcgMTc4NzQ3 MSwgNC43OUcsIDI3ODkxMCBvYmplY3RzCkRhdGFzZXQgem1pcnJvci90bXAgW1pQTF0sIElE IDMwLCBjcl90eGcgNywgMjYuN00sIDgwMyBvYmplY3RzCkRhdGFzZXQgem1pcnJvci9zd2Fw IFtaVk9MXSwgSUQgNDgsIGNyX3R4ZyAxMywgMS4wMEcsIDIgb2JqZWN0cwpEYXRhc2V0IHpt aXJyb3IvdmFyIFtaUExdLCBJRCA0MiwgY3JfdHhnIDExLCAyMS43RywgMzQ0NTkgb2JqZWN0 cwpEYXRhc2V0IHptaXJyb3Ivc3F1aWRjYWNoZSBbWlBMXSwgSUQgMjE2LCBjcl90eGcgOTk1 Mzc0LCA2Ljc1RywgNDM4MzI0IG9iamVjdHMKRGF0YXNldCB6bWlycm9yL21haWxzdG9yYWdl IFtaUExdLCBJRCA2NywgY3JfdHhnIDU1MzEsIDY0LjBHLCAyMTI0MzIgb2JqZWN0cwpEYXRh c2V0IHptaXJyb3IgW1pQTF0sIElEIDE2LCBjcl90eGcgMSwgMTAwME0sIDQ1MjUgb2JqZWN0 cyAK --------------030004050903030507050606-- From owner-freebsd-fs@FreeBSD.ORG Sat Feb 16 11:23:44 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 942E173E 
for ; Sat, 16 Feb 2013 11:23:44 +0000 (UTC) (envelope-from peter@rulingia.com) Received: from vps.rulingia.com (host-122-100-2-194.octopus.com.au [122.100.2.194]) by mx1.freebsd.org (Postfix) with ESMTP id E83C46B5 for ; Sat, 16 Feb 2013 11:23:43 +0000 (UTC) Received: from server.rulingia.com (c220-239-237-213.belrs5.nsw.optusnet.com.au [220.239.237.213]) by vps.rulingia.com (8.14.5/8.14.5) with ESMTP id r1GBNXcY048607 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK) for ; Sat, 16 Feb 2013 22:23:34 +1100 (EST) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.5/8.14.5) with ESMTP id r1GBNSkk077357 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO) for ; Sat, 16 Feb 2013 22:23:28 +1100 (EST) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.5/8.14.5/Submit) id r1GBNSiP077356 for freebsd-fs@freebsd.org; Sat, 16 Feb 2013 22:23:28 +1100 (EST) (envelope-from peter) Date: Sat, 16 Feb 2013 22:23:28 +1100 From: Peter Jeremy To: freebsd-fs@freebsd.org Subject: Calculating ZFS pool sizes at different ashift values Message-ID: <20130216112328.GA416@server.rulingia.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="OgqxwSJOaUobr8KG" Content-Disposition: inline X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.21 (2010-09-15) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 16 Feb 2013 11:23:44 -0000 --OgqxwSJOaUobr8KG Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable I am trying to work out the impact of converting my ZFS pools from ashift=9 to ashift=12. In theory, that's just a matter of getting a list of the sizes of each object, rounding each size up to 4KiB and summing the total but I'm having problems with the "each object" bit. "zdb -dd" reports each object and the 'dsize' column gives the on-disk size. But it lists the objects in each filesystem snapshot so the same object can appear multiple times in the output. And object numbers are only unique within a filesystem, though they don't appear to correlate with znode numbers for files. Further, the following appears to show that the same object can have different sizes in different snapshots: Dataset zroot/var/obj@r242865a [ZPL], ID 8205, cr_txg 10094330, 820M, 75640 objects Object lvl iblk dblk dsize lsize %full type 54536 2 16K 128K 55.5K 256K 100.00 ZFS plain file Dataset zroot/var/obj@r237444 [ZPL], ID 30, cr_txg 7939066, 573M, 64913 objects 54536 1 16K 10.0K 2.50K 10.0K 100.00 ZFS plain file Does this represent a total of 55.5K on disk (the 2.5K in the second snapshot is part of the 55.5K in the first snapshot) or 58K on disk (the two objects are distinct despite having the same number)? Can anyone explain how to identify unique objects and their sizes within a zpool?
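
[Editorial sketch, not part of Peter's message] The "rounding each size up to 4KiB" step described above is just a round-up to the next ashift=12 block boundary. A minimal illustration in Python, using the two dsize values (2.50K and 55.5K) quoted from zdb above; the function name and byte conversions are mine, not from the thread:

    # Round an on-disk size up to the block size implied by an ashift value
    # (ashift=12 means 4096-byte blocks).
    def round_to_ashift(size_bytes, ashift=12):
        block = 1 << ashift
        return -(-size_bytes // block) * block   # ceiling division, scaled back to bytes

    # The 2.50K object (2560 bytes) would occupy 4096 bytes at ashift=12;
    # the 55.5K object (56832 bytes) would round up to 57344 bytes (56K).
    print(round_to_ashift(2560), round_to_ashift(56832))
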
-- Peter Jeremy --OgqxwSJOaUobr8KG Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlEfbC8ACgkQ/opHv/APuIdZhwCgwYfWzfJaLu3OfrMWP2S9TAp2 qOIAn24+DmzgNpMRWtol9JAmknMHFqW6 =mvvs -----END PGP SIGNATURE----- --OgqxwSJOaUobr8KG-- From owner-freebsd-fs@FreeBSD.ORG Sat Feb 16 12:07:59 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 12D76E26 for ; Sat, 16 Feb 2013 12:07:59 +0000 (UTC) (envelope-from nowakpl@platinum.linux.pl) Received: from platinum.linux.pl (platinum.edu.pl [81.161.192.4]) by mx1.freebsd.org (Postfix) with ESMTP id CBFE27F3 for ; Sat, 16 Feb 2013 12:07:58 +0000 (UTC) Received: by platinum.linux.pl (Postfix, from userid 87) id 1D9B047E1A; Sat, 16 Feb 2013 13:01:02 +0100 (CET) X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on platinum.linux.pl X-Spam-Level: X-Spam-Status: No, score=-1.3 required=3.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.3.2 Received: from [10.255.0.2] (c38-073.client.duna.pl [83.151.38.73]) by platinum.linux.pl (Postfix) with ESMTPA id 1B47D47E0F for ; Sat, 16 Feb 2013 13:01:00 +0100 (CET) Message-ID: <511F74F5.2050900@platinum.linux.pl> Date: Sat, 16 Feb 2013 13:00:53 +0100 From: Adam Nowacki User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130107 Thunderbird/17.0.2 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Calculating ZFS pool sizes at different ashift values References: <20130216112328.GA416@server.rulingia.com> In-Reply-To: <20130216112328.GA416@server.rulingia.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 16 Feb 2013 12:07:59 -0000 On 2013-02-16 12:23, Peter Jeremy wrote: > I am trying to work out the impact of converting my ZFS pools from > ashift=9 to ashift=12. In theory, that's just a matter of getting a > list of the sizes of each object, rounding each size up to 4KiB and > summing the total but I'm having problems with the "each object" bit. Assuming a single disk or mirror. For raidz, parity and alignment have to be counted too. > "zdb -dd" reports each object and the 'dsize' column gives the on-disk > size. But it lists the objects in each filesystem snapshot so the > same object can appear multiple times in the output. And object > numbers are only unique within a filesystem, though they don't appear > to correlate with znode numbers for files. Further, the following > appears to show that the same object can have different sizes in > different snapshots: > Dataset zroot/var/obj@r242865a [ZPL], ID 8205, cr_txg 10094330, 820M, 75640 objects > Object lvl iblk dblk dsize lsize %full type > 54536 2 16K 128K 55.5K 256K 100.00 ZFS plain file > Dataset zroot/var/obj@r237444 [ZPL], ID 30, cr_txg 7939066, 573M, 64913 objects > 54536 1 16K 10.0K 2.50K 10.0K 100.00 ZFS plain file > > Does this represent a total of 55.5K on disk (the 2.5K in the second > snapshot is part of the 55.5K in the first snapshot) or 58K on disk > (the two objects are distinct despite having the same number)? > > Can anyone explain how to identify unique objects and their sizes within > a zpool? > 'zdb -vvvv zroot/var/obj@r242865a' and count unique DVAs.
If both snapshots have the same DVA then that particular block is shared. A DVA is 'vdev number : offset : size' (in hex).
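
[Editorial sketch, not part of the original thread] Combining Adam's advice with Peter's round-up idea: save the 'zdb -vvvv' output for each snapshot, collect every DVA printed, count each (vdev, offset) pair only once, and sum the allocation sizes rounded up to 4KiB. The DVA text format '<vdev:offset:asize>' in hex follows Adam's description; the parsing details are assumptions, and raidz parity/alignment (which Adam notes must also be counted) and gang blocks are ignored here:

    #!/usr/bin/env python3
    # Rough estimate of ashift=12 space usage from saved 'zdb -vvvv pool/fs@snap'
    # output files: collect every DVA printed, count each (vdev, offset) pair
    # only once (blocks shared between snapshots carry identical DVAs), and
    # round the allocated size of each unique block up to 4 KiB.
    # Single disk or mirror only; raidz parity/alignment is not accounted for.
    import re
    import sys

    DVA_RE = re.compile(r'DVA\[\d+\]=<([0-9a-fA-F]+):([0-9a-fA-F]+):([0-9a-fA-F]+)>')

    seen = set()          # (vdev, offset) pairs already counted
    current_bytes = 0     # allocated bytes as reported by zdb
    ashift12_bytes = 0    # the same blocks, each rounded up to 4 KiB

    for path in sys.argv[1:]:          # one saved zdb output file per snapshot
        with open(path) as fh:
            for line in fh:
                for vdev, offset, asize in DVA_RE.findall(line):
                    key = (vdev, offset)
                    if key in seen:    # block already counted via another snapshot
                        continue
                    seen.add(key)
                    size = int(asize, 16)
                    current_bytes += size
                    ashift12_bytes += -(-size // 4096) * 4096

    print("unique block copies:", len(seen))
    print("bytes allocated now:", current_bytes)
    print("approx bytes at ashift=12:", ashift12_bytes)

Typical use would be something like saving 'zdb -vvvv zroot/var/obj@r242865a > snap-r242865a.txt' for each snapshot of interest and then running the script over the saved files; the result is only an estimate of the data blocks zdb shows, not of every structure in the pool.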