From owner-freebsd-fs@FreeBSD.ORG Sun Mar 18 07:50:06 2012
Date: Sun, 18 Mar 2012 15:50:04 +0800
From: Ka Man Tong <kmtong@gmail.com>
To: freebsd-fs@freebsd.org
Subject: ZFS kernel panic at zio_ddt_free
List-Id: Filesystems

Hi,

We are using FreeBSD 9.0 on amd64. Recently we have encountered many kernel panics.
And now when the ZFS pool is mounted (it was last re-imported using a fresh installation of FreeBSD on i386), the kernel hangs with the following stack dump:

...
#6 0xc7490154 at zio_ddt_free+0x54
#7 0xc7490411 at zio_execute+0xa1
#8 0xc0a571ba at taskqueue_run_locked+0xca
#9 0xc0a580ac at taskqueue_thread_loop+0xbc
#10 0xc09ea997 at fork_exit+0x97
#11 0xc0d32b04 at fork_trampoline+0x8

After googling for a while, it seems that the zio_ddt_free function is not handled correctly:

http://mail.opensolaris.org/pipermail/zfs-discuss/2012-February/050972.html

We don't know whether our case is the same as the one in the zfs-discuss thread, but we would like to try the patch first. Any help would be appreciated.

Thanks.

From owner-freebsd-fs@FreeBSD.ORG Sun Mar 18 21:51:14 2012
Date: Sun, 18 Mar 2012 17:51:14 -0400
Message-ID: <86mx7dd1d9.wl%gnn@neville-neil.com>
From: gnn@freebsd.org
To: Gustau Pérez
In-Reply-To: <4F5FCCD7.7070609@entel.upc.edu>
References: <4F5C81BA.1050001@entel.upc.edu> <86ehswtmek.wl%gnn@neville-neil.com> <4F5FCCD7.7070609@entel.upc.edu>
Cc: FreeBSD current , fs@freebsd.org
Subject: Re: RFC: FUSE kernel module for the kernel...

At Tue, 13 Mar 2012 23:40:23 +0100, Gustau Pérez wrote:
>
> Hi,
>
> While testing ntfs-3g, after doing a fairly large transfer with rsync, I
> found I couldn't unmount the filesystem. After some tries, and before
> checking that no process was accessing the filesystem, I tried to force
> the unmount. After that the system panicked instantly.
>
> I'm running HEAD/AMD64 r232862+head-fuse-2.diff.
>
> I have a dump of it, but it would seem that fuse is missing debug
> symbols (I don't know why), so the backtrace is incomplete. I compiled
> fuse just by running make in $SRCDIR/sys/modules/fuse. I'll try to
> reproduce the panic and figure out what happens. Any help would also be
> appreciated on this other issue.
>

If and when you get a panic dump please pass it along.
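Gustau's remark about fuse.ko missing debug symbols suggests rebuilding the module with them before the next crash. A sketch follows; DEBUG_FLAGS is the usual hook in the kernel-module makefiles, but the exact paths and the vmcore number are assumptions for your setup:

```shell
# Rebuild the fuse module with debugging symbols so kgdb can resolve
# frames inside fuse.ko; a plain "make" builds without -g.
cd $SRCDIR/sys/modules/fuse
make clean
make DEBUG_FLAGS=-g
make install

# After the next panic, open the crash dump with the rebuilt symbols
# (adjust the vmcore number to match the latest dump in /var/crash):
kgdb /boot/kernel/kernel /var/crash/vmcore.0
```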
Best,
George

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 19 11:07:10 2012
Date: Mon, 19 Mar 2012 11:07:09 GMT
Message-Id: <201203191107.q2JB79iQ033562@freefall.freebsd.org>
From: FreeBSD bugmaster
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases.

S Tracker Resp.
Description
--------------------------------------------------------------------------------
o kern/166193 fs [ufs] [hang] FB 8.0 freeze during the kernel dump
o kern/165950 fs [ffs] SU+J and fsck problem
o kern/165923 fs [nfs] Writing to NFS-backed mmapped files fails if flu
o kern/165392 fs Multiple mkdir/rmdir fails with errno 31
o kern/165087 fs [unionfs] lock violation in unionfs
o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency
o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256 fs [zfs] device entry for volume is not created after zfs
o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944 fs [coda] Coda file system module looks broken in 9.0
o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751 fs [zfs] [panic] kernel panics during file operations
o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi
o kern/162362 fs [snapshots] [panic] ufs with snapshot(s) panics when g
o kern/162083 fs [zfs] [panic] zfs unmount -f pool
o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161897 fs [zfs] [patch] zfs partition probing causing long delay
o kern/161864 fs [ufs] removing journaling from UFS partition fails on
o bin/161807 fs [patch] add option for explicitly specifying metadata
o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161511 fs [unionfs] Filesystem deadlocks when using multiple uni
o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280 fs [zfs] Stack overflow in gptzfsboot
o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic
o kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930 fs [ufs] [panic] kernel core
o kern/159663 fs [socket] [nullfs] sockets don't work though nullfs mou
o kern/159402 fs [zfs][loader] symlinks cause I/O errors
o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs()
o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077 fs [zfs] Can't cd .. with latest zfs version
o kern/159048 fs [smbfs] smb mount corrupts large files
o kern/159045 fs [zfs] [hang] ZFS scrub freezes system
o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802 fs amd(8) ICMP storm and unkillable process.
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929 fs [nfs] NFS slow read
o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781 fs [zfs] zfs is losing the snapshot directory,
p kern/156545 fs [ufs] mv could break UFS on SMP systems
o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current
o kern/155587 fs [zfs] [panic] kernel panic with zfs
f kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104 fs [zfs][patch] use /dev prefix by default when importing
o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828 fs [msdosfs] Unable to create directories on external USB
o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228 fs [md] md getting stuck in wdrain state
o kern/153996 fs [zfs] zfs root mount error while kernel is not located
o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716 fs [zfs] zpool scrub time remaining is incorrect
o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions
o kern/153520 fs [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable
o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351 fs [zfs] locking directories/files in ZFS
o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w
o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022 fs [nfs] nfs service hangs with linux client [regression]
o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory
o kern/151905 fs [zfs] page fault under load in /sbin/zfs
o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648 fs [zfs] disk wait bug
o kern/151629 fs [fs] [patch] Skip empty directory entries during name
o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a
o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251 fs [ufs] Can not create files on filesystem with heavy us
o kern/151226 fs [zfs] can't delete zfs snapshot
o kern/151111 fs [zfs] vnodes leakage during zfs unmount
o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208 fs mksnap_ffs(8) hang/deadlock
o kern/149173 fs [patch] [zfs] make OpenSolaris installa
o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138 fs [zfs] zfs raidz pool commands freeze
o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147560 fs [zfs] [boot] Booting 8.1-PRERELEASE raidz system take
o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786 fs [zfs] zpool import hangs with checksum errors
o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528 fs [zfs] Severe memory leak in ZFS on i386
o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server
s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev
o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank
o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189 fs [nfs] nfsd performs abysmally under load
o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416 fs [panic] Kernel panic on online filesystem optimization
s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825 fs [nfs] [panic] Kernel panic on NFS client
o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212 fs [nfs] NFSv4 client strange work ...
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR
o kern/142878 fs [zfs] [vfs] lock order reversal
o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real
o kern/142489 fs [zfs] [lor] allproc/zfs LOR
o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068 fs [ufs] BSD labels are got deleted spontaneously
o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues (
o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri
o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640 fs [zfs] snapshot crash
o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs
p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n
o kern/139597 fs [patch] [tmpfs] tmpfs initializes va_gen but doesn't u
o kern/139564 fs [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo
o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot
o kern/138662 fs [panic] ffs_blkfree: freeing free block
o kern/138421 fs [ufs] [patch] remove UFS label limitations
o kern/138202 fs mount_msdosfs(1) see only 2Gb
o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873 fs [ntfs] Missing directories/files on NTFS volume
o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic
p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS
o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot
o kern/134491 fs [zfs] Hot spares are rather cold...
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis
o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397 fs reboot causes filesystem corruption (failure to sync b
o kern/132331 fs [ufs] [lor] LOR ufs and syncer
o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145 fs [panic] File System Hard Crashes
o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo
o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin
o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210 fs [nullfs] Error by check nullfs
o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8)
o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029 fs [panic] mount(8): trying to mount a write protected zi
o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS
o kern/123939 fs [msdosfs] corrupts new files
f sparc/123566 fs [zfs] zpool import issue: EOVERFLOW
o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes
o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912 fs [2tb] disk sizing/geometry problem with large array
o kern/118713 fs [minidump] [patch] Display media size required for a k
o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime
o kern/118126 fs [nfs] [patch] Poor NFS server write performance
o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954 fs [ufs] dirhash on very large directories blocks the mac
o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with
o kern/116583 fs [ffs] [hang] System freezes for short time when using
o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468 fs [patch] [request] add -d option to umount(8) to detach
o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral
o bin/113838 fs [patch] [request] mount(8): add support for relative p
o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843 fs [msdosfs] Long Names of files are incorrectly created
o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems
s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem
o kern/109024 fs [msdosfs] [iconv] mount_msdosfs: msdosfs_iconv: Operat
o kern/109010 fs [msdosfs] can't mv directory within fat32 file system
o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist
o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear
o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498 fs [request] newfs(8) has no option to clear the first 12
o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849 fs [ufs] rename on UFS filesystem is not atomic
o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean'
o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733 fs [smbfs] smbfs may cause double unlock
o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134 fs [smbfs] [patch] Preserve access and modification time
a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet
o kern/88657 fs [smbfs] windows client hang when browsing a samba shar
o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64
o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi
o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859 fs [smbfs] System reboot while umount smbfs.
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779 fs Background-fsck checks one filesystem twice and omits
o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun
o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po
o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange
o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503 fs [smbfs] mount_smbfs does not work as non-root
o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc
o kern/51583 fs [nullfs] [patch] allow to work with devices and socket
o kern/36566 fs [smbfs] System reboot with dead smb mount and umount
o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc
o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t

264 problems total.
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 19 12:59:19 2012
Date: Mon, 19 Mar 2012 13:59:15 +0100
Message-ID: <4F672DA3.4040009@brockmann-consult.de>
From: Peter Maloney <peter.maloney@brockmann-consult.de>
To: freebsd-fs@freebsd.org
Subject: Re: booting from ZFS hangs and system does not respond

PC-BSD probably overwrote your pmbr/bootcode. Reinstall whatever pmbr and bootcode you were previously using, e.g. with "gpart bootcode ...".

Another option is simply to simplify and modernize your setup and boot straight off ZFS (in which case the ZFS root should be the first ZFS slice on the disk; I am not sure whether you can put it in a slice inside a slice as you have described).

On 17.03.2012 02:39, martinko wrote:
> Hi,
>
> Booting from ZFS hangs and the system becomes unresponsive. Details follow.
> My system is an older installation where I have two 1TB disks with
> several smaller partitions for testing and then a big one for FreeBSD.
> The latter is comprised of 1GB UFS + swap + ZFS. UFS is used for
> booting and then the whole system is on ZFS (as used to be standard
> before booting from ZFS was available). ZFS is set up as a mirror.
>
> Now all ran happily until one day PC-BSD 8.2 was installed into one of
> the small partitions. No idea why, but since then FreeBSD wouldn't
> boot. It started displaying the prompt below, but either the keyboard is
> ignored or the system hangs, as nothing can be done at that point.
>
> GEOM_LABEL: Label for provider ... is ...
> Trying to mount root from zfs:tank/ROOT
>
> Manual root filesystem specification:
> [...]
> mountroot>
>
> Now my question is what might possibly have gone wrong and how to fix
> it? By fixing I mean either making the system run again (preferably)
> or at least saving the data (getting it off ZFS).
>
> Thanks in advance!
>
> M.
>
> PS: I forgot to mention that this is a FreeBSD 7.2 installation.
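Peter's "gpart bootcode ..." suggestion might, for a disk that boots ZFS from GPT via gptzfsboot, look like the sketch below. The disk name and partition index are assumptions; check them with "gpart show" first, and use the boot files that match your actual layout (an MBR+UFS-boot setup like the one described would need the boot0/boot blocks instead):

```shell
# Inspect the partitioning first; the freebsd-boot partition index matters.
gpart show ada0

# Rewrite the protective MBR and the ZFS-aware boot code
# (here assuming freebsd-boot is partition index 1 on ada0):
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
```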
From owner-freebsd-fs@FreeBSD.ORG Wed Mar 21 09:52:40 2012
Date: Wed, 21 Mar 2012 10:47:45 +0100
Message-ID: <4F69A3C1.7040305@omnilan.de>
From: Harald Schmalzbauer <h.schmalzbauer@omnilan.de>
Organization: OmniLAN
To: FreeBSD current , fs@freebsd.org
Subject: Idea for GEOM and policy based file encryption
Hello,

I personally don't have the need to encrypt whole filesystems, and if I need to transfer sensitive data I use gpg to encrypt the tarball or whatever. But I'd like to see some single files encrypted on my systems, e.g. wpa_supplicant.conf, ipsec.conf and so on.

Since I recently secured LDAP queries via IPsec, I found this to be the absolutely perfect solution: encryption takes place only where really needed, with about no overhead (compared to SSL-LDAP). So would it be imaginable that there's something like the SPD for network sockets also for files?

The idea is that in this fileSPD there's an entry saying that /etc/ipsec.conf must be AES-encrypted. In a fileSA there's the info that /etc/ipsec.conf can be read by uid xyz (or only one specific kernel, identified by something new to implement) and with a special key ID. The keys are loaded as modules, optionally symmetrically encrypted with a passphrase.

Would such a policy-based file encryption control be doable with GEOM? Maybe it's easier to make use of existing tools like gpg with GEOM interaction? I don't want to reinvent any file encryption; I just need some automatic encryption (without _mandatory_ interaction) with the lowest possible bypass possibilities.
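To make the fileSPD/fileSA idea concrete, a policy file in the spirit of setkey(8) might look like the following. The syntax, the file name, and the key ID are entirely invented for illustration; no such facility exists:

```
# /etc/filespd.conf (hypothetical)
# fileSPD: these files must never reach the disk unencrypted
spdadd /etc/ipsec.conf           -P require aes-256-cbc;
spdadd /etc/wpa_supplicant.conf  -P require aes-256-cbc;

# fileSA: who may obtain the plaintext, and with which key
add /etc/ipsec.conf  uid 0  keyid 0x6abf346a;
```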
Thanks,

-Harry

From owner-freebsd-fs@FreeBSD.ORG Wed Mar 21 10:09:12 2012
Date: Wed, 21 Mar 2012 11:09:05 +0100
Message-ID: <20120321100905.GN5886@equilibrium.bsdes.net>
From: Victor Balada Diaz <victor@bsdes.net>
To: Harald Schmalzbauer
In-Reply-To: <4F69A3C1.7040305@omnilan.de>
Cc: FreeBSD current , fs@freebsd.org
Subject: Re: Idea for GEOM and policy based file encryption

On Wed, Mar 21, 2012 at 10:47:45AM +0100, Harald Schmalzbauer wrote:
> Hello,
>
> I personally don't have the need to encrypt whole filesystems and if I
> need to transfer sensitive data I use gpg to encrypt
the tarball or > whatever. > But, I'd like to see some single files encrypted on my systems, eg. > wpasupplicant.conf, ipsec.conf aso. > Since I recently secured LDAP queries via IPSec, I found this to be the > absolute perfect solution. Encryption takes place only where really > needed with about no overhead (compared to SSL-LDAP) > So would it be imaginable, that there's something like the SPD for > network sockets also for files? > The idea is that in this fileSPD, there's the entry that /etc/ipsec.conf > must be aes encrypted. In a fileSA, there's the info that > /etc/ipsec.conf can be read by uid xyz (or only one specific kernel, > identified by something new to implement) and with a special key ID. The > keys are loadad as modules, optionally symmetric encrypted by passphrase. > > Was such a policy based file encryption control doable with GEOM? > Maybe it's easier to make use of existing tools like gpg with GEOM > interaction? > I don't want to reinvent any file encryption, I just need some automatic > encryption (without _mandatory_ interaction) with lowest possible bypass > possibilities. > > Thanks, > Hello Harald, I'm not an expert, but I guess that GEOM is not the place for that kind of encryption. GEOM has no knowledge of files or directories; that is file-system specific. You would need to modify UFS, or maybe do something like CFS[1]. CFS works as an NFS server, and you could modify it to encrypt only the needed files. Also you could write a simple FS on FUSE, but last time I checked, our FUSE support had some problems. I hope it helps. Regards. Victor. [1]: http://www.crypto.com/software/ -- The most convincing proof that intelligent life exists on other planets is that they have never tried to contact us.
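Victor's point is that per-file encryption needs file-level context that GEOM lacks, which pushes the problem into userland. A minimal sketch of Harald's fileSPD/fileSA idea at that level — with openssl(1) symmetric AES standing in for gpg, and a hard-coded passphrase that in reality would come from a protected keystore or loaded key module:

```shell
#!/bin/sh
# Hypothetical userland sketch of the "fileSPD" idea: a policy says a
# given path must be stored AES-encrypted; a wrapper enforces it.
# openssl(1) stands in for gpg; the passphrase is illustrative only.
PASS="pass:example-passphrase"

plain=$(mktemp)
enc=$(mktemp)
printf 'conn host-to-host\n' > "$plain"   # stand-in for /etc/ipsec.conf

# "fileSPD" rule: this path must be AES-256 encrypted at rest
openssl enc -aes-256-cbc -pbkdf2 -pass "$PASS" -in "$plain" -out "$enc"

# "fileSA" side: only a holder of the key recovers the plaintext,
# which openssl writes back to stdout
openssl enc -d -aes-256-cbc -pbkdf2 -pass "$PASS" -in "$enc"

rm -f "$plain" "$enc"
```

On a real system the policy table would map paths to keys and the wrapper would hook file open/close — exactly the per-file knowledge that a block-level GEOM class does not have.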
From owner-freebsd-fs@FreeBSD.ORG Wed Mar 21 10:19:31 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2E8B41065679; Wed, 21 Mar 2012 10:19:31 +0000 (UTC) (envelope-from ae@FreeBSD.org) Received: from mail.kirov.so-ups.ru (mail.kirov.so-ups.ru [178.74.170.1]) by mx1.freebsd.org (Postfix) with ESMTP id CF48C8FC1B; Wed, 21 Mar 2012 10:19:30 +0000 (UTC) Received: from kas30pipe.localhost (localhost.kirov.so-ups.ru [127.0.0.1]) by mail.kirov.so-ups.ru (Postfix) with SMTP id 79137B8026; Wed, 21 Mar 2012 14:19:23 +0400 (MSK) Received: from kirov.so-ups.ru (unknown [172.21.81.1]) by mail.kirov.so-ups.ru (Postfix) with ESMTP id 738C8B801F; Wed, 21 Mar 2012 14:19:23 +0400 (MSK) Received: by ns.kirov.so-ups.ru (Postfix, from userid 1010) id 57B3BB9FF8; Wed, 21 Mar 2012 14:19:23 +0400 (MSK) Received: from [127.0.0.1] (elsukov.kirov.oduur.so [10.118.3.52]) by ns.kirov.so-ups.ru (Postfix) with ESMTP id 0EA77B9FEB; Wed, 21 Mar 2012 14:19:23 +0400 (MSK) Message-ID: <4F69AB2B.5050205@FreeBSD.org> Date: Wed, 21 Mar 2012 14:19:23 +0400 From: "Andrey V. Elsukov" User-Agent: Mozilla Thunderbird 1.5 (FreeBSD/20051231) MIME-Version: 1.0 To: Harald Schmalzbauer References: <4F69A3C1.7040305@omnilan.de> In-Reply-To: <4F69A3C1.7040305@omnilan.de> X-Enigmail-Version: 1.3.5 Content-Type: text/plain; charset=KOI8-R Content-Transfer-Encoding: 7bit X-SpamTest-Version: SMTP-Filter Version 3.0.0 [0284], KAS30/Release X-SpamTest-Info: Not protected Cc: FreeBSD current , fs@freebsd.org Subject: Re: Idea for GEOM and policy based file encryption X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Mar 2012 10:19:31 -0000 On 21.03.2012 13:47, Harald Schmalzbauer wrote: > Was such a policy based file encryption control doable with GEOM? 
> Maybe it's easier to make use of existing tools like gpg with GEOM > interaction? > I don't want to reinvent any file encryption, I just need some automatic > encryption (without _mandatory_ interaction) with lowest possible bypass > possibilities. It sounds like not a task for GEOM. -- WBR, Andrey V. Elsukov From owner-freebsd-fs@FreeBSD.ORG Wed Mar 21 10:24:20 2012 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7E314106564A; Wed, 21 Mar 2012 10:24:20 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 524778FC0A; Wed, 21 Mar 2012 10:24:20 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.5/8.14.5) with ESMTP id q2LAOK1Y036788; Wed, 21 Mar 2012 10:24:20 GMT (envelope-from avg@freefall.freebsd.org) Received: (from avg@localhost) by freefall.freebsd.org (8.14.5/8.14.5/Submit) id q2LAOKW9036784; Wed, 21 Mar 2012 10:24:20 GMT (envelope-from avg) Date: Wed, 21 Mar 2012 10:24:20 GMT Message-Id: <201203211024.q2LAOKW9036784@freefall.freebsd.org> To: avg@FreeBSD.org, freebsd-fs@FreeBSD.org, avg@FreeBSD.org From: avg@FreeBSD.org Cc: Subject: Re: kern/166193: [dump] FB 8.0 freeze during the kernel dump X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Mar 2012 10:24:20 -0000 Old Synopsis: [ufs] [hang] FB 8.0 freeze during the kernel dump New Synopsis: [dump] FB 8.0 freeze during the kernel dump Responsible-Changed-From-To: freebsd-fs->avg Responsible-Changed-By: avg Responsible-Changed-When: Wed Mar 21 10:22:25 UTC 2012 Responsible-Changed-Why: This PR looks like a duplicate of PR 139614. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=166193 From owner-freebsd-fs@FreeBSD.ORG Wed Mar 21 10:47:15 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 129AE1065673; Wed, 21 Mar 2012 10:47:15 +0000 (UTC) (envelope-from ae@FreeBSD.org) Received: from mail.kirov.so-ups.ru (ns.kirov.so-ups.ru [178.74.170.1]) by mx1.freebsd.org (Postfix) with ESMTP id B1DB28FC0C; Wed, 21 Mar 2012 10:47:14 +0000 (UTC) Received: from kas30pipe.localhost (localhost.kirov.so-ups.ru [127.0.0.1]) by mail.kirov.so-ups.ru (Postfix) with SMTP id 787ECB8027; Wed, 21 Mar 2012 14:47:13 +0400 (MSK) Received: from kirov.so-ups.ru (unknown [172.21.81.1]) by mail.kirov.so-ups.ru (Postfix) with ESMTP id 6E392B801F; Wed, 21 Mar 2012 14:47:13 +0400 (MSK) Received: by ns.kirov.so-ups.ru (Postfix, from userid 1010) id 50D83B9FF9; Wed, 21 Mar 2012 14:47:13 +0400 (MSK) Received: from [127.0.0.1] (elsukov.kirov.oduur.so [10.118.3.52]) by ns.kirov.so-ups.ru (Postfix) with ESMTP id 1AA57B9FF0; Wed, 21 Mar 2012 14:47:13 +0400 (MSK) Message-ID: <4F69B1B0.3040005@FreeBSD.org> Date: Wed, 21 Mar 2012 14:47:12 +0400 From: "Andrey V. 
Elsukov" User-Agent: Mozilla Thunderbird 1.5 (FreeBSD/20051231) MIME-Version: 1.0 To: Victor Balada Diaz References: <4F69A3C1.7040305@omnilan.de> <20120321100905.GN5886@equilibrium.bsdes.net> In-Reply-To: <20120321100905.GN5886@equilibrium.bsdes.net> X-Enigmail-Version: 1.3.5 Content-Type: text/plain; charset=KOI8-R Content-Transfer-Encoding: 7bit X-SpamTest-Version: SMTP-Filter Version 3.0.0 [0284], KAS30/Release X-SpamTest-Info: Not protected Cc: Harald Schmalzbauer , FreeBSD current , fs@freebsd.org Subject: Re: Idea for GEOM and policy based file encryption X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Mar 2012 10:47:15 -0000 On 21.03.2012 14:09, Victor Balada Diaz wrote: > You would need to modify UFS, or maybe do something like CFS[1]. CFS works > as an NFS server and you could modify it to only cipher the needed files. > > Also you could write a simple FS on FUSE, but last time i checked, our > FUSE support had some problems. > Yet another link: http://www.arg0.net/encfs -- WBR, Andrey V. 
Elsukov From owner-freebsd-fs@FreeBSD.ORG Wed Mar 21 23:56:51 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AF1671065674 for ; Wed, 21 Mar 2012 23:56:51 +0000 (UTC) (envelope-from andy@time-domain.co.uk) Received: from mail.time-domain.co.uk (81-179-248-237.static.dsl.pipex.com [81.179.248.237]) by mx1.freebsd.org (Postfix) with ESMTP id 36E888FC08 for ; Wed, 21 Mar 2012 23:56:50 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by mail.time-domain.co.uk (8.14.3/8.14.3) with ESMTP id q2LNundm005416 for ; Wed, 21 Mar 2012 23:56:49 GMT Date: Wed, 21 Mar 2012 23:56:49 +0000 (GMT) From: andy thomas X-X-Sender: andy-tds@mail.time-domain.co.uk To: freebsd-fs@freebsd.org Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: clamav-milter 0.97.1 at mail X-Virus-Status: Clean Subject: ZFS read/write performance slows with time X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 21 Mar 2012 23:56:51 -0000 A server running 64-bit FreeBSD 8.0 boots from a SATA disk and then mounts a ZFS mirror consisting of two SAS disks plus one spare. Immediately after booting, the filesystem is fast and responsive and 'zpool iostat -v tank' reports read and write disk bandwidths of over 22 MB/s. But over a period of time this performance begins to deteriorate, and after 180 days of uptime this server, which is running mail, samba and webmail servers in 3 separate jails, really struggles, especially with the IMAP daemon. zpool iostat -v reports a maximum read bandwidth of around 2 MB/s and a write bandwidth of at most 143 KB/s. Rebooting the system restores normal performance, but the cycle gradually repeats itself.
I can't see anything wrong in any log and the system has 12 GB of memory and a 2 Ghz quad-core Xeon CPU so it isn't under-resourced. At boot time ZFS reports its version as being 13 - could the problem be due to a memory leak or some other issue with early versions of ZFS that have since been fixed in later FreeBSD releases? Andy From owner-freebsd-fs@FreeBSD.ORG Thu Mar 22 00:12:11 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id B9CFE106566C for ; Thu, 22 Mar 2012 00:12:11 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-gy0-f182.google.com (mail-gy0-f182.google.com [209.85.160.182]) by mx1.freebsd.org (Postfix) with ESMTP id 73BB98FC17 for ; Thu, 22 Mar 2012 00:12:11 +0000 (UTC) Received: by ghrr20 with SMTP id r20so1731213ghr.13 for ; Wed, 21 Mar 2012 17:12:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; bh=cQQ1W39T70+4oxxgcPldJgfaPijkA7GbxZWILhHFi7U=; b=uUJAtWk/5xnVeSTTM3gnhLW4fdIWVKaIXK105uMZFLxp1VpTx+aKvfggPdCuOrzbV4 IQucrU8lDEaeqpHAilkCvtu1AASAuu9qcUqvoMuWxxc3sPsNhAHzN5gKNmQPWzjjvdXp /tInuFpnurO5n5iND5W3KP9uCqRfmIQFEb6JipH5Vh3/cupZsIB1b6UB2/mV+G29pWSk HEovYceZypAO3rpq7HFA2MemaO0yUTYlvyRenvFhOmfkZb0fvIESNluWAuB+TDuAdHoF fC+jtHNEwm1dIdgxCfg0hdn8ftNhlRKKFYV/5wjK5cKB7lk/DT8+16vtoX1RROse5VJc pUjw== MIME-Version: 1.0 Received: by 10.236.126.168 with SMTP id b28mr5790700yhi.88.1332375130821; Wed, 21 Mar 2012 17:12:10 -0700 (PDT) Sender: artemb@gmail.com Received: by 10.147.181.4 with HTTP; Wed, 21 Mar 2012 17:12:10 -0700 (PDT) In-Reply-To: References: Date: Wed, 21 Mar 2012 17:12:10 -0700 X-Google-Sender-Auth: 55hKcW2jWGMSBdZmM-ER_0Y859Y Message-ID: From: Artem Belevich To: andy thomas Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS 
read/write performance slows with time X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Mar 2012 00:12:11 -0000 On Wed, Mar 21, 2012 at 4:56 PM, andy thomas wrote: > A server running 64-bit FreeBSD 8.0 boots from a SATA disk and then mounts a ... > But over a period of time, this performance begins to deteriorate and after > 180 days of uptime If it's indeed 8.0 that you are running, I would strongly recommend upgrading to 8-STABLE or 8.3 when it's released. A *lot* of things in ZFS got fixed/improved since ZFSv13 in 8.0. --Artem From owner-freebsd-fs@FreeBSD.ORG Thu Mar 22 00:18:13 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 346B81065673; Thu, 22 Mar 2012 00:18:13 +0000 (UTC) (envelope-from andy@time-domain.co.uk) Received: from mail.time-domain.co.uk (81-179-248-237.static.dsl.pipex.com [81.179.248.237]) by mx1.freebsd.org (Postfix) with ESMTP id AFFB98FC0A; Thu, 22 Mar 2012 00:18:12 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by mail.time-domain.co.uk (8.14.3/8.14.3) with ESMTP id q2M0IBcG005523; Thu, 22 Mar 2012 00:18:11 GMT Date: Thu, 22 Mar 2012 00:18:11 +0000 (GMT) From: andy thomas X-X-Sender: andy-tds@mail.time-domain.co.uk To: Artem Belevich In-Reply-To: Message-ID: References: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Virus-Scanned: clamav-milter 0.97.1 at mail X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org Subject: Re: ZFS read/write performance slows with time X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Mar 2012 00:18:13 -0000 On Wed, 21 Mar 2012, Artem Belevich wrote: > On 
Wed, Mar 21, 2012 at 4:56 PM, andy thomas wrote: >> A server running 64-bit FreeBSD 8.0 boots from a SATA disk and then mounts a > ... >> But over a period of time, this performance begins to deteriorate and after >> 180 days of uptime > > If it's indeed 8.0 that you are running, I would strongly recommend > upgrading to 8-STABLE or 8.3 when it's released. > A *lot* of things in ZFS got fixed/improved since ZFSv13 in 8.0. It's running 8.0-RELEASE. I've not seen these gradual deterioration problems with 8.2-RELEASE (or indeed on OpenSolaris SPARC and OpenIndiana servers). Thanks for the quick reply. Andy From owner-freebsd-fs@FreeBSD.ORG Thu Mar 22 00:34:11 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 83DCE106566B for ; Thu, 22 Mar 2012 00:34:11 +0000 (UTC) (envelope-from prvs=142816df9d=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 17D498FC15 for ; Thu, 22 Mar 2012 00:34:10 +0000 (UTC) X-Spam-Processed: mail1.multiplay.co.uk, Thu, 22 Mar 2012 00:33:19 +0000 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail1.multiplay.co.uk X-Spam-Level: X-Spam-Status: No, score=-5.0 required=6.0 tests=USER_IN_WHITELIST shortcircuit=ham autolearn=disabled version=3.2.5 Received: from r2d2 ([188.220.16.49]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50018923598.msg for ; Thu, 22 Mar 2012 00:33:18 +0000 X-MDRemoteIP: 188.220.16.49 X-Return-Path: prvs=142816df9d=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <54245FA2BA39427993410B7363178EE7@multiplay.co.uk> From: "Steven Hartland" To: "andy thomas" , References: Date: Thu, 22 Mar 2012 00:33:25 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; 
charset="iso-8859-1"; reply-type=response Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: Subject: Re: ZFS read/write performance slows with time X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Mar 2012 00:34:11 -0000 ----- Original Message ----- From: "andy thomas" To: Sent: Wednesday, March 21, 2012 11:56 PM Subject: ZFS read/write performance slows with time >A server running 64-bit FreeBSD 8.0 boots from a SATA disk and then mounts > a ZFS mirror consisting of two SAS disks plus one spare. Immediately after > booting, the filesystem is fast and responsive and 'zpool iostat -v tank' > reports read and write disk bandwidths of over 22 MB. > > But over a period of time, this performance begins to deteriorate and > after 180 days of uptime this server, which is running mail, samba and > webmail servers in 3 separate jails, really struggles especially the IMAP > daemon. zpool iostat -v reports a maximum read bandwidth of around 2MB and > a write bandwidth of 143 KB maximum. Rebooting the system restores normal > performance but the cycle gradually repeats itself. > > I can't see anything wrong in any log and the system has 12 GB of memory > and a 2 Ghz quad-core Xeon CPU so it isn't under-resourced. At boot time > ZFS reports its version as being 13 - could the problem be due to a memory > leak or some other issue with early versions of ZFS that have since been > fixed in later FreeBSD releases? There are known issues with the code in 8.0 which could easily cause the behaviour you describe, I'd recommend upgrading to 8-STABLE or 8.3-RELEASE when its done as that brings in a large amount of fixes for ZFS including v28 support iirc. 
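One way to pin down a gradual slowdown like the one Andy describes is to log the bandwidth columns of `zpool iostat` over time rather than eyeballing them after the fact. A small hypothetical sketch — the hard-coded sample line mimics the shape of `zpool iostat` output; on the live server one would pipe `zpool iostat tank 60` through the awk filter instead:

```shell
#!/bin/sh
# Hypothetical logger: timestamp the read/write bandwidth columns of
# `zpool iostat` so a gradual decay can be graphed later. The sample
# line stands in for live command output.
sample='tank        1.2T   600G    120     80  2.1M   143K'

printf '%s\n' "$sample" | awk -v ts="$(date +%FT%T)" \
    '$1 == "tank" { print ts, "read_bw=" $6, "write_bw=" $7 }'
# prints e.g. "2012-03-22T00:33:25 read_bw=2.1M write_bw=143K"
```

Appending those lines to a file from cron would show whether the throughput falls off smoothly over the 180 days or drops at some specific point (an ARC or fragmentation threshold, for instance).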
Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Thu Mar 22 17:45:16 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id BB488106566B for ; Thu, 22 Mar 2012 17:45:16 +0000 (UTC) (envelope-from stevenschlansker@gmail.com) Received: from mail-gx0-f182.google.com (mail-gx0-f182.google.com [209.85.161.182]) by mx1.freebsd.org (Postfix) with ESMTP id 740C88FC21 for ; Thu, 22 Mar 2012 17:45:16 +0000 (UTC) Received: by ggnk4 with SMTP id k4so2444466ggn.13 for ; Thu, 22 Mar 2012 10:45:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:content-type:content-transfer-encoding:subject:date:message-id :to:mime-version:x-mailer; bh=p9RUiiNPpFOgULgXpdwD2FBg5L2IDBPh1BTx1JsUXPU=; b=pPgry9GZOtQ+2Z4OxDy/wTjO2PXSscWewYkCXh1xop+iIUX2Qfu6MQBuiheojUc7mM Y457Rt+XML37GDBVSQDzeumhi/WNUMBhueNCDmG4mbvAqvvhNZ8TsKRcXam56GMu50fQ 11DqFPSwhr8H9/XDB2RNFSJmj2nqKReRoS0zuqbuBkuSc+YCF/Uj7uQc00acC5xxPr4q G4Rokh2dLP+5/2UXGQuwzukE2CuQLYvcogRlU7FErL+2MGXBPXZD/XSmoG8kmKsLdMIx So4YJTTz1PBiREVeUmwU9eU2/L4y1NkUUBpvj9kZaAConNPW6VmwXUOwOHhSrwh9Glrn 7gag== Received: by 10.68.217.97 with SMTP id ox1mr22081051pbc.81.1332438315248; Thu, 22 Mar 2012 10:45:15 -0700 (PDT) Received: from sexy.corp.trumpet.io ([207.86.77.58]) by mx.google.com with ESMTPS id j3sm4181106pbb.29.2012.03.22.10.45.13 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 22 Mar 2012 10:45:14 -0700 (PDT) From: Steven Schlansker 
Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Date: Thu, 22 Mar 2012 10:45:12 -0700 Message-Id: <80A49A2A-F258-4368-82F0-5C441AC7477A@gmail.com> To: freebsd-fs Mime-Version: 1.0 (Apple Message framework v1257) X-Mailer: Apple Mail (2.1257) Subject: ZDB looks in /dev/dsk but can't find anything X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Mar 2012 17:45:16 -0000 Hi, I've got a zpool that refuses to import. I'm following through old mailing list threads trying to exhaust my diagnostic capabilities before posting to the mailing list, but zdb -e seems to always look in /dev/dsk for disks:
open("/dev/dsk/aacd1p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd2p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd3p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd4p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd5p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd6p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd7p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd8p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd9p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd10p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd11p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd12p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd13p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd14p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd15p1",O_RDONLY,00) ERR#2 'No such file or directory'
open("/dev/dsk/aacd16p1",O_RDONLY,00) ERR#2 'No such file or directory'
zdb: can't open 'tank': No
such file or directory. Neither the label nor the geom devices refer to /dev/dsk anywhere, and this seems to be a Solaris-ism. Is the tool itself perhaps broken? I see a reference to the same problem here: http://lists.freebsd.org/pipermail/freebsd-current/2011-February/022874.html but maybe the fix did not get applied. I am running FreeBSD d0028.nessops.net 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan 3 07:46:30 UTC 2012 root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 Thanks, Steven From owner-freebsd-fs@FreeBSD.ORG Thu Mar 22 18:34:08 2012 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1E0C5106566B for ; Thu, 22 Mar 2012 18:34:08 +0000 (UTC) (envelope-from stevenschlansker@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id CD4F78FC0A for ; Thu, 22 Mar 2012 18:34:07 +0000 (UTC) Received: by yhgm50 with SMTP id m50so2504919yhg.13 for ; Thu, 22 Mar 2012 11:34:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:content-type:content-transfer-encoding:subject:date:message-id :to:mime-version:x-mailer; bh=LKl3LhxEiLdmd9k4alaYgVCYAirlDobEXf26MJo+9Po=; b=SuKipuBP2KEYjqxQRQRa8aXbnO+8cKunEl11yIM8dol383tcD5TtMjhMpED5GHeggo Jt6/HurXondnTW7hVrF2W+UbnjyaWKO1dTv9KG0uQwDyd+k48ofGIV3KntFaH8IJdGfP 0moULgW0zi1lEmbyCXkU9JQ0ztIrW2ygon4qpc51V04tioWH81eeSgMpuARhl2WGNd9o U3U5ihoSbol7zewTceXQi7M8u0bPfrGTpxs+sG+SqA3N+gAz+1dGFspJrJukgvjfl3Sh j5hDBSN1nHJj/dFu6Ysx3zcldP4TvY5d8bccCqIcZ7HdVFxxTQfYNuRKYvxFfULijy7m dCig== Received: by 10.236.170.134 with SMTP id p6mr9165386yhl.81.1332441246946; Thu, 22 Mar 2012 11:34:06 -0700 (PDT) Received: from sexy.corp.trumpet.io ([207.86.77.58]) by mx.google.com with ESMTPS id k35sm7218148ani.3.2012.03.22.11.34.05 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 22 Mar 2012 11:34:06 -0700
(PDT) From: Steven Schlansker Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable Date: Thu, 22 Mar 2012 11:34:04 -0700 Message-Id: <12E6E0DC-DE17-4815-9ED4-C5DC86BAD445@gmail.com> To: freebsd-fs Mime-Version: 1.0 (Apple Message framework v1257) X-Mailer: Apple Mail (2.1257) Subject: Importing zpool wedges machine hard X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Mar 2012 18:34:08 -0000 Hi all, I have a backup server running FreeBSD d0028.nessops.net 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan 3 07:46:30 UTC 2012 root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 There is a single ZFS pool used for storage, configured as such:
pool: tank
id: 13753647290422885969
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
tank ONLINE
raidz2-0 ONLINE
aacd1p1 ONLINE
aacd2p1 ONLINE
aacd3p1 ONLINE
aacd4p1 ONLINE
aacd5p1 ONLINE
aacd6p1 ONLINE
aacd7p1 ONLINE
aacd8p1 ONLINE
raidz2-1 ONLINE
aacd9p1 ONLINE
aacd10p1 ONLINE
aacd11p1 ONLINE
aacd12p1 ONLINE
aacd13p1 ONLINE
aacd14p1 ONLINE
aacd15p1 ONLINE
aacd16p1 ONLINE
The setup was running just fine for about a month and a half until yesterday, when the machine hung hard. No problem, reset, comes back fine. A few hours later, it crashed again. Now any attempt to import the pool spins the disks for about 15-20 minutes and then wedges the machine. Before it hangs, "top" reports that the zpool command is dancing between active (CPU0), tx->tx, and bio states. Then the machine becomes unresponsive (both over the network and at the console) and must be reset. I'm seeing some old diagnostics instructions from http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg18818.html including running zdb -e -d and zdb -e -b, and they are running right now.
But I'm hopeful someone has more concrete advice, as this is a pretty = important system to me. Thanks! Steven From owner-freebsd-fs@FreeBSD.ORG Thu Mar 22 19:18:54 2012 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [69.147.83.52]) by hub.freebsd.org (Postfix) with ESMTP id 8B8A61065673; Thu, 22 Mar 2012 19:18:54 +0000 (UTC) (envelope-from gperez@entel.upc.edu) Received: from violet.upc.es (violet.upc.es [147.83.2.51]) by mx1.freebsd.org (Postfix) with ESMTP id 0F4248FC17; Thu, 22 Mar 2012 19:18:53 +0000 (UTC) Received: from ackerman2.upc.es (ackerman2.upc.es [147.83.2.244]) by violet.upc.es (8.14.1/8.13.1) with ESMTP id q2MHrW8K032530 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL); Thu, 22 Mar 2012 18:53:33 +0100 Received: from portgus.lan (152.Red-83-44-98.dynamicIP.rima-tde.net [83.44.98.152]) (authenticated bits=0) by ackerman2.upc.es (8.14.4/8.14.4) with ESMTP id q2MHrVZO023276 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO); Thu, 22 Mar 2012 18:53:31 +0100 Message-ID: <4F6B66F0.9060001@entel.upc.edu> Date: Thu, 22 Mar 2012 18:52:48 +0100 From: =?UTF-8?B?R3VzdGF1IFDDqXJleg==?= User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:10.0.3) Gecko/20120316 Thunderbird/10.0.3 MIME-Version: 1.0 To: gnn@freebsd.org References: <4F5C81BA.1050001@entel.upc.edu> <86ehswtmek.wl%gnn@neville-neil.com> <4F5FCCD7.7070609@entel.upc.edu> <86mx7dd1d9.wl%gnn@neville-neil.com> In-Reply-To: <86mx7dd1d9.wl%gnn@neville-neil.com> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 8bit X-Scanned-By: MIMEDefang 2.70 on 147.83.2.244 X-Mail-Scanned: Criba 2.0 + Clamd X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-3.0 (violet.upc.es [147.83.2.51]); Thu, 22 Mar 2012 18:53:33 +0100 (CET) Cc: FreeBSD current , fs@freebsd.org Subject: Re: RFC: FUSE kernel module for the kernel... 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Mar 2012 19:18:54 -0000 On 18/03/2012 22:51, gnn@freebsd.org wrote: > At Tue, 13 Mar 2012 23:40:23 +0100, > Gustau Pérez wrote: >> Hi, >> >> testing ntfs-3g, after doing a fairly large transfer with rsync, I >> found I couldn't unmount the filesystem. After some tries, and before >> checking that no process was accessing the filesystem, I tried to force >> the unmount. After that the system panicked instantly. >> >> I'm running HEAD/AMD64 r232862+head-fuse-2.diff. >> >> I have a dump of it, but it would seem that fuse is missing debug >> symbols (I don't know why), so the backtrace is incomplete. I compiled >> fuse just by doing make on $SRCDIR/sys/modules/fuse. I'll try to >> reproduce the panic and figure out what happens. Any help would also be >> appreciated on this other issue. >> > If and when you get a panic dump please pass it along. > > Best, > George I'm trying to reproduce it. I saw that the fuse module is not built during the kernel build process, so I added it to sys/modules/Makefile. That way it will be built with debug symbols, which should allow me to get a complete core. I'll try to get it and post it as soon as possible. About the setattr/getattr blocking problems with gvfs-fuse-daemon, I will also try to see what is going on. Help will be appreciated, because it is quite useful on the desktop. George, please ping me when you have time. I don't want to send too much information to the list, as it may get difficult to follow (if you think it won't, please let me know and I'll send my findings as they happen).
Thanks, Gustau From owner-freebsd-fs@FreeBSD.ORG Fri Mar 23 11:49:55 2012 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A34C0106566C for ; Fri, 23 Mar 2012 11:49:55 +0000 (UTC) (envelope-from matthew@FreeBSD.org) Received: from smtp.infracaninophile.co.uk (smtp6.infracaninophile.co.uk [IPv6:2001:8b0:151:1:3cd3:cd67:fafa:3d78]) by mx1.freebsd.org (Postfix) with ESMTP id 108C48FC08 for ; Fri, 23 Mar 2012 11:49:54 +0000 (UTC) Received: from seedling.black-earth.co.uk (seedling.black-earth.co.uk [IPv6:2001:8b0:151:1:fa1e:dfff:feda:c0bb]) (authenticated bits=0) by smtp.infracaninophile.co.uk (8.14.5/8.14.5) with ESMTP id q2NBnja4006567 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Fri, 23 Mar 2012 11:49:46 GMT (envelope-from matthew@FreeBSD.org) X-DKIM: OpenDKIM Filter v2.5.0 smtp.infracaninophile.co.uk q2NBnja4006567 Authentication-Results: smtp.infracaninophile.co.uk/q2NBnja4006567; dkim=none (no signature); dkim-adsp=none Message-ID: <4F6C6352.9090906@FreeBSD.org> Date: Fri, 23 Mar 2012 11:49:38 +0000 From: Matthew Seaman User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:11.0) Gecko/20120313 Thunderbird/11.0 MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org X-Enigmail-Version: 1.4 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="------------enigCDFA7CFE59B3A81E43B70DA7" X-Virus-Scanned: clamav-milter 0.97.3 at lucid-nonsense.infracaninophile.co.uk X-Virus-Status: Clean X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00 autolearn=ham version=3.3.2 X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on lucid-nonsense.infracaninophile.co.uk Cc: Subject: Overriding the zpool bootfs property from the loader? 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 23 Mar 2012 11:49:55 -0000 This is an OpenPGP/MIME signed message (RFC 2440 and 3156) --------------enigCDFA7CFE59B3A81E43B70DA7 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Dear all, I've been playing around with using ZFS boot environments recently, and been pretty pleased with the concept in general. One thing I'd love to be able to do, but which I can't find any description of for FreeBSD, is to be able to override the bootfs property of the root zpool at an early stage in the boot process -- e.g. by escaping to the loader prompt. This would facilitate easily switching between different boot environments, and be particularly useful if the default boot environment had somehow been rendered unbootable. Apparently this sort of functionality is possible in Solaris, by using 'boot -Z' or 'boot -L' from the 'ok' prompt: http://docs.oracle.com/cd/E19082-01/817-2271/ggpco/index.html Any clues on whether the equivalent is possible in FreeBSD, and if so how, would be gratefully received. Cheers, Matthew -- Dr Matthew J Seaman MA, D.Phil.
PGP: http://www.infracaninophile.co.uk/pgpkey

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 23 12:15:19 2012
From: Florian Wagner <florian@wagner-flo.net>
To: Matthew Seaman
Cc: freebsd-fs@FreeBSD.org
Date: Fri, 23 Mar 2012 13:15:08 +0100
Message-ID: <20120323131508.0272be25@auedv3.syscomp.de>
In-Reply-To: <4F6C6352.9090906@FreeBSD.org>
Subject: Re: Overriding the zpool bootfs property from the loader?
On Fri, 23 Mar 2012 11:49:38 +0000 Matthew Seaman wrote:

> Dear all,
>
> I've been playing around with using ZFS boot environments recently,
> and been pretty pleased with the concept in general.
>
> One thing I'd love to be able to do, but which I can't find any
> description of for FreeBSD, is to be able to override the bootfs
> property of the root zpool at an early stage in the boot process --
> eg. by escaping to the loader prompt. This would facilitate easily
> switching between different boot environments, and be particularly
> useful if the default boot environment had somehow been rendered
> unbootable.
>
> Apparently this sort of functionality is possible in Solaris, by
> using 'boot -Z' or 'boot -L' from the 'ok' prompt:
>
> http://docs.oracle.com/cd/E19082-01/817-2271/ggpco/index.html
>
> Any clues on whether the equivalent is possible in FreeBSD and if so
> how would be gratefully received.

I've recently discussed more or less the same on this list. The thread
is called "Extending zfsboot.c to allow selecting filesystem from
boot.config" and is available in the mailing list archives of October
and November 2011 and January 2012.

Summary: Andriy Gapon has a bunch of changes against head in his avgbsd
repository [1] which implement something like this. With his help I've
backported these to stable 8.

I've just recently gone over the work and put together a culminating
patch, which I've tested as extensively as possible in my at-home
environment. This is available as a Mercurial patch queue at [2] or
directly at [3].
Regards
Florian

[1] http://gitorious.org/~avg/freebsd/avgbsd
[2] http://bitbucket.org/wagnerflo/freebsd-stable8-patches
[3] http://bitbucket.org/wagnerflo/freebsd-stable8-patches/raw/default/extended-zfsboot

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 23 13:13:02 2012
From: Matthew Seaman <matthew@FreeBSD.org>
To: Florian Wagner
Cc: freebsd-fs@FreeBSD.org
Date: Fri, 23 Mar 2012 13:12:45 +0000
Message-ID: <4F6C76CD.7050006@FreeBSD.org>
In-Reply-To: <20120323131508.0272be25@auedv3.syscomp.de>
Subject: Re: Overriding the zpool bootfs property from the loader?

On 23/03/2012 12:15, Florian Wagner wrote:

> I've recently discussed more or less the same on this list. The
> thread is called "Extending zfsboot.c to allow selecting filesystem
> from boot.config" and available in the mailing list archives of
> October, November 2011 and January 2012.
>
> Summary: Andriy Gapon has a bunch of changes against head in his
> avgbsd repository [1] which implement something like this. With his
> help I've backported these to stable 8.
>
> I've just recently gone over the work and put together a culminating
> patch, which I've tested as extensively as possible in my at-home
> environment. This is available as a Mercurial patch queue at [2] or
> directly at [3].

Yes, this looks like pretty much what I was asking for. So, if I
understand this correctly, given a root zpool named 'zroot' and a
number of ZFSes with different boot environments (zroot/ROOT/FOO,
zroot/ROOT/BAR, etc.)
I could interrupt the boot before the menu screen and just type at the
boot: prompt --

   zfs:zroot/ROOT/FOO:boot/zfsloader

or

   zfs:zroot/ROOT/BAR:boot/zfsloader

to select different environments. Is that right? I'll give your
patches a go over the weekend -- I'm on stable/9 though.

Cheers,

Matthew

--
Dr Matthew J Seaman MA, D.Phil.
PGP: http://www.infracaninophile.co.uk/pgpkey

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 23 14:01:39 2012
From: Florian Wagner <florian@wagner-flo.net>
To: Matthew Seaman
Cc: freebsd-fs@FreeBSD.org
Date: Fri, 23 Mar 2012 15:01:36 +0100
Message-ID: <20120323150136.470773a9@naclador.mos32.de>
In-Reply-To: <4F6C76CD.7050006@FreeBSD.org>
Subject: Re: Overriding the zpool bootfs property from the loader?

On Fri, 23 Mar 2012 13:12:45 +0000 Matthew Seaman wrote:

> On 23/03/2012 12:15, Florian Wagner wrote:
> > I've recently discussed more or less the same on this list.
> > [...]
>
> Yes, this looks like pretty much what I was asking for. So, if I
> understand this correctly, given a root zpool named 'zroot' and a
> number of ZFSes with different boot environments (zroot/ROOT/FOO,
> zroot/ROOT/BAR, etc.) I could interrupt the boot before the menu
> screen and just type at the boot: prompt --
>
>    zfs:zroot/ROOT/FOO:boot/zfsloader
>
> or
>
>    zfs:zroot/ROOT/BAR:boot/zfsloader
>
> to select different environments. Is that right? I'll give your
> patches a go over the weekend -- I'm on stable/9 though.

Actually the format is pool:filesystem:path, where path is optional
and defaults to /boot/zfsloader.
So examples would be zroot:ROOT/FOO: or zroot:ROOT/BAR:/boot/zfsloader.

I think this is documented incorrectly in one of the commits in the
avgbsd repository. Obviously boot(8) should be updated accordingly...

For reference, the setup on my fileserver looks like this:

  $ zpool get bootfs root
  NAME  PROPERTY  VALUE             SOURCE
  root  bootfs    root/boot-config  local
  $ mount | grep root/boot-config
  root/boot-config on /boot/config (zfs, local, nfsv4acls)
  $ cat /boot/config/boot.config
  root:stable8-r232838:

Regards
Florian

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 23 16:30:51 2012
From: Taylor <j.freebsd-zfs@enone.net>
To: freebsd-fs@freebsd.org
Date: Fri, 23 Mar 2012 09:30:50 -0700
Message-ID: <45654FDD-A20A-47C8-B3B5-F9B0B71CC38B@enone.net>
Subject: ZFS extra space overhead for ashift=12 vs ashift=9 raidz2 pool?

Hello,

I'm bringing up a new ZFS filesystem and have noticed something strange
with respect to the overhead from ZFS. When I create a raidz2 pool with
512-byte sectors (ashift=9), I have an overhead of 2.59%, but when I
create the zpool using 4k sectors (ashift=12), I have an overhead of
8.06%. This amounts to a difference of 2.79TiB in my particular
application, which I'd like to avoid. :)

(Assuming I haven't done anything wrong. :) ) Is the extra overhead for
4k sector (ashift=12) raidz2 pools expected? Is there any way to reduce
this?

(In my very limited performance testing, 4K sectors do seem to perform
slightly better and more consistently, so I'd like to use them if I can
avoid the extra overhead.)

Details below.

Thanks in advance for your time,

-Taylor


I'm running:
FreeBSD host 9.0-RELEASE FreeBSD 9.0-RELEASE #0 amd64

I'm using Hitachi 4TB Deskstar 0S03364 drives, which are 4K sector
devices.

In order to "future proof" the raidz2 pool against possible variations
in replacement drive size, I've created a single partition on each
drive, starting at sector 2048 and using 100MB less than the total
available space on the disk.

  $ sudo gpart list da2
  Geom name: da2
  modified: false
  state: OK
  fwheads: 255
  fwsectors: 63
  last: 7814037134
  first: 34
  entries: 128
  scheme: GPT
  Providers:
  1. Name: da2p1
     Mediasize: 4000682172416 (3.7T)
     Sectorsize: 512
     Stripesize: 0
     Stripeoffset: 1048576
     Mode: r1w1e1
     rawuuid: 71ebbd49-7241-11e1-b2dd-00259055e634
     rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
     label: (null)
     length: 4000682172416
     offset: 1048576
     type: freebsd-zfs
     index: 1
     end: 7813834415
     start: 2048
  Consumers:
  1.
     Name: da2
     Mediasize: 4000787030016 (3.7T)
     Sectorsize: 512
     Mode: r1w1e2

Each partition gives me 4000682172416 bytes (or 3.64 TiB). I'm using 16
drives. I create the zpool with 4K sectors as follows:

  $ sudo gnop create -S 4096 /dev/da2p1
  $ sudo zpool create zav raidz2 da2p1.nop da3p1 da4p1 da5p1 da6p1 da7p1 da8p1 da9p1 da10p1 da11p1 da12p1 da13p1 da14p1 da15p1 da16p1 da17p1

I confirm ashift=12:

  $ sudo zdb zav | grep ashift
      ashift: 12
      ashift: 12

"zpool list" approximately matches the expected raw capacity of
16*4000682172416 = 64010914758656 bytes (58.28 TiB):

  $ zpool list zav
  NAME  SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
  zav   58T   1.34M  58.0T  0%   1.00x  ONLINE  -

For raidz2, I'd expect to see 4000682172416*14 = 56009550413824 bytes
(50.94 TiB). However, I only get:

  $ zfs list zav
  NAME  USED   AVAIL  REFER  MOUNTPOINT
  zav   1.10M  46.8T  354K   /zav

Or using df for greater accuracy:

  $ df zav
  Filesystem  1K-blocks    Used  Avail        Capacity  Mounted on
  zav         50288393472  354   50288393117  0%        /zav

A total of 51495314915328 bytes (46.83 TiB). (This is for a freshly
created zpool before any snapshots, etc. have been performed.)

I measure overhead as "(expected - actual) / expected", which in the
case of the 4k sector (ashift=12) raidz2 pool comes to 8.06%.

To create a 512-byte sector (ashift=9) raidz2 pool, I basically just
replace "da2p1.nop" with "da2p1" when creating the zpool. I confirm
ashift=9. The zpool raw size is the same (as far as I can tell with the
limited precision of zpool list). However, the available size according
to zfs list/df is 54560512935936 bytes (49.62 TiB), which amounts to an
overhead of 2.59%.
There are some minor differences in the ALLOC and USED size listings,
so I repeat them here for the 512-byte sector raidz2 pool:

  $ zpool list zav
  NAME  SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
  zav   58T   228K   58.0T  0%   1.00x  ONLINE  -
  $ zfs list zav
  NAME  USED  AVAIL  REFER  MOUNTPOINT
  zav   198K  49.6T  73.0K  /zav
  $ df zav
  Filesystem  1K-blocks    Used  Avail        Capacity  Mounted on
  zav         53281750914  73    53281750841  0%        /zav

I expect some overhead from ZFS, and according to this blog post:

  http://www.cuddletech.com/blog/pivot/entry.php?id=1013
  (via http://mail.opensolaris.org/pipermail/zfs-discuss/2010-May/041773.html)

there may be a 1/64 (1.56%) overhead baked into ZFS. Interestingly
enough, when I create a pool with no raid/mirroring, I get an overhead
of 1.93% regardless of ashift=9 or ashift=12, which is quite close to
the 1/64 number. I have also tested raidz, which behaves similarly to
raidz2, although the overhead is slightly lower in each case: 1)
ashift=9 raidz overhead is 2.33% and 2) ashift=12 raidz overhead is
7.04%.
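For what it's worth, the overhead percentages above can be reproduced
from the byte counts reported by gpart and df. A minimal sketch (just
arithmetic on the numbers quoted in this message):

```python
# Reproduce the raidz2 overhead percentages from the byte counts
# reported above. The partition size and drive count come from the
# gpart listing; "available" comes from df (1K-blocks * 1024).
part_bytes = 4000682172416           # bytes per partition (da2p1)
data_drives = 16 - 2                 # 16-disk raidz2 -> 14 data drives

expected = part_bytes * data_drives  # 56009550413824 bytes (50.94 TiB)

avail_ashift12 = 50288393472 * 1024  # 51495314915328 bytes
avail_ashift9  = 53281750914 * 1024  # 54560512935936 bytes

def overhead(actual, expected=expected):
    """Overhead as (expected - actual) / expected, in percent."""
    return 100.0 * (expected - actual) / expected

print(f"ashift=12: {overhead(avail_ashift12):.2f}%")  # 8.06%
print(f"ashift=9:  {overhead(avail_ashift9):.2f}%")   # 2.59%
```

This matches the figures quoted in the original message to within
rounding.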
In order to preserve space, I've put the zdb listings for both ashift=9
and ashift=12 raidz2 pools here:

  http://pastebin.com/v2xjZkNw

There are also some differences in the zdb output; for example, "SPA
allocated" is higher for the 4K sector raidz2 pool, which seems
interesting, although I don't understand the significance of this.

From owner-freebsd-fs@FreeBSD.ORG Fri Mar 23 16:40:33 2012
From: Dennis Glatting <dg@pki2.com>
To: Taylor
Cc: freebsd-fs@freebsd.org
Date: Fri, 23 Mar 2012 09:40:24 -0700 (PDT)
In-Reply-To: <45654FDD-A20A-47C8-B3B5-F9B0B71CC38B@enone.net>
Subject: Re: ZFS extra space overhead for ashift=12 vs ashift=9 raidz2 pool?

Somewhat related:

I am also using 4TB Hitachi drives but only four.
Although fairly happy with these drives, I have had one disk fail in
the two months I have been using them. This may have been an infant
failure, but I am wondering if you have had any similar experiences
with the drives.

On Fri, 23 Mar 2012, Taylor wrote:

> Hello,
>
> I'm bringing up a new ZFS filesystem and have noticed something
> strange with respect to the overhead from ZFS. When I create a raidz2
> pool with 512-byte sectors (ashift=9), I have an overhead of 2.59%,
> but when I create the zpool using 4k sectors (ashift=12), I have an
> overhead of 8.06%. This amounts to a difference of 2.79TiB in my
> particular application, which I'd like to avoid. :)
>
> [...]

From owner-freebsd-fs@FreeBSD.ORG Sat Mar 24 13:11:12 2012
From: krad <kraduk@gmail.com>
To: Florian Wagner
Cc: freebsd-fs@freebsd.org, Matthew Seaman
Date: Sat, 24 Mar 2012 13:10:49 +0000
In-Reply-To: <20120323150136.470773a9@naclador.mos32.de>
Subject: Re: Overriding the zpool bootfs property from the loader?

On 23 March 2012 14:01, Florian Wagner wrote:

> On Fri, 23 Mar 2012 13:12:45 +0000 Matthew Seaman wrote:
>
>> On 23/03/2012 12:15, Florian Wagner wrote:
>> > I've recently discussed more or less the same on this list.
>> > [...]
>>
>> Yes, this looks like pretty much what I was asking for. So, if I
>> understand this correctly, given a root zpool named 'zroot' and a
>> number of ZFSes with different boot environments (zroot/ROOT/FOO,
>> zroot/ROOT/BAR, etc.) I could interrupt the boot before the menu
>> screen and just type at the boot: prompt --
>>
>>    zfs:zroot/ROOT/FOO:boot/zfsloader
>>
>> or
>>
>>    zfs:zroot/ROOT/BAR:boot/zfsloader
>>
>> to select different environments.
>> Is that right? I'll give your patches a go over the weekend -- I'm
>> on stable/9 though.
>
> Actually the format is pool:filesystem:path, where path is optional
> and defaults to /boot/zfsloader. So examples would be zroot:ROOT/FOO:
> or zroot:ROOT/BAR:/boot/zfsloader.
>
> I think this is documented incorrectly in one of the commits in the
> avgbsd repository. Obviously boot(8) should be updated accordingly...
>
> For reference, the setup on my fileserver looks like this:
>
>   $ zpool get bootfs root
>   NAME  PROPERTY  VALUE             SOURCE
>   root  bootfs    root/boot-config  local
>   $ mount | grep root/boot-config
>   root/boot-config on /boot/config (zfs, local, nfsv4acls)
>   $ cat /boot/config/boot.config
>   root:stable8-r232838:
>
> Regards
> Florian

This is really good and I have been wanting something like this for
ages. Would it be possible to configure beastie to utilize this, or is
that too late in the boot process? Any idea of the time scale for
committing this to head/9-stable?
From owner-freebsd-fs@FreeBSD.ORG Sat Mar 24 16:42:32 2012
From: Alexander Leidinger <alexander@leidinger.net>
To: Taylor
Cc: freebsd-fs@freebsd.org
Date: Sat, 24 Mar 2012 17:42:18 +0100
Message-ID: <20120324174218.00005f63@unknown>
In-Reply-To: <45654FDD-A20A-47C8-B3B5-F9B0B71CC38B@enone.net>
Subject: Re: ZFS extra space overhead for ashift=12 vs ashift=9 raidz2 pool?
On Fri, 23 Mar 2012 09:30:50 -0700 Taylor wrote:

> I'm bringing up a new ZFS filesystem and have noticed something
> strange with respect to the overhead from ZFS. When I create a raidz2
> pool with 512-byte sectors (ashift=9), I have an overhead of 2.59%,
> but when I create the zpool using 4k sectors (ashift=12), I have an
> overhead of 8.06%. This amounts to a difference of 2.79TiB in my
> particular application, which I'd like to avoid. :)
>
> (Assuming I haven't done anything wrong. :) ) Is the extra overhead
> for 4k sector (ashift=12) raidz2 pools expected? Is there any way to
> reduce this?

This depends upon the data you write. If your data is always a multiple
of 4k in size, you will probably have less overhead (there is probably
still some overhead from ZFS metadata). If your data is only ever a
multiple of 512 bytes, you will have much less overhead on an ashift=9
FS than on an ashift=12 FS. If the size of your data is random and
always less than 4k, you have more overhead than if the size of your
data is random and always several GB.

Bye,
Alexander.
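The rounding effect Alexander describes can be sketched in a few lines.
This is only an illustrative model of per-block sector padding; it
ignores raidz parity, metadata, and compression, so it is not a real
ZFS space calculator:

```python
# Illustrative model: each stored block is rounded up to a whole
# number of sectors, where the sector size is 2**ashift. This shows
# why sub-4k writes waste more space at ashift=12 than at ashift=9.
def padded(size, ashift):
    sector = 1 << ashift                # 512 B at ashift=9, 4 KiB at ashift=12
    return -(-size // sector) * sector  # round up to a sector multiple

# A block that is already a multiple of 4k wastes nothing either way:
assert padded(128 * 1024, 9) == padded(128 * 1024, 12) == 128 * 1024

# But a 512-byte block occupies a full 4 KiB sector at ashift=12:
print(padded(512, 9), padded(512, 12))  # 512 4096
```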
--
http://www.Leidinger.net  Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org       netchild @ FreeBSD.org  : PGP ID = 72077137

From owner-freebsd-fs@FreeBSD.ORG Sat Mar 24 18:38:57 2012
Date: Sat, 24 Mar 2012 11:38:50 -0700
From: Taylor
To: Dennis Glatting
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS extra space overhead for ashift=12 vs ashift=9 raidz2 pool?

Dennis,

This is a bit off topic from my original question and I'm hoping not to distract from it too much, but to briefly answer your question:

My experience with 4TB Hitachi drives is limited; I've only had these drives for about a week.
One of the drives exhibited ICRC errors, which in theory could be just a cabling issue, but I couldn't reproduce the problem with the same cable/slot and a different drive, so I ended up RMAing the ICRC drive just in case. However, I have had good luck with Hitachi 3TB drives over the past year and one Hitachi 4TB drive over the last month, and I have not encountered any other problems with this batch of 4TB drives so far.

Cheers,
-Taylor

On Mar 23, 2012, at 9:40 AM, Dennis Glatting wrote:

> Somewhat related:
>
> I am also using 4TB Hitachi drives, but only four. Although I am fairly happy with these drives, I have had one disk fail in the two months I have been using them. This may have been an infant failure, but I am wondering if you have had any similar experiences with these drives.
>
> On Fri, 23 Mar 2012, Taylor wrote:
>
>> Hello,
>>
>> I'm bringing up a new ZFS filesystem and have noticed something strange with respect to the overhead from ZFS. When I create a raidz2 pool with 512-byte sectors (ashift=9), I have an overhead of 2.59%, but when I create the zpool using 4k sectors (ashift=12), I have an overhead of 8.06%. This amounts to a difference of 2.79TiB in my particular application, which I'd like to avoid. :)
>>
>> (Assuming I haven't done anything wrong. :) ) Is the extra overhead for 4k sector (ashift=12) raidz2 pools expected? Is there any way to reduce this?
>>
>> (In my very limited performance testing, 4K sectors do seem to perform slightly better and more consistently, so I'd like to use them if I can avoid the extra overhead.)
>>
>> Details below.
>>
>> Thanks in advance for your time,
>>
>> -Taylor
>>
>> I'm running:
>> FreeBSD host 9.0-RELEASE FreeBSD 9.0-RELEASE #0 amd64
>>
>> I'm using Hitachi 4TB Deskstar 0S03364 drives, which are 4K sector devices.
>> In order to "future proof" the raidz2 pool against possible variations in replacement drive size, I've created a single partition on each drive, starting at sector 2048 and using 100MB less than the total available space on the disk.
>> $ sudo gpart list da2
>> Geom name: da2
>> modified: false
>> state: OK
>> fwheads: 255
>> fwsectors: 63
>> last: 7814037134
>> first: 34
>> entries: 128
>> scheme: GPT
>> Providers:
>> 1. Name: da2p1
>>    Mediasize: 4000682172416 (3.7T)
>>    Sectorsize: 512
>>    Stripesize: 0
>>    Stripeoffset: 1048576
>>    Mode: r1w1e1
>>    rawuuid: 71ebbd49-7241-11e1-b2dd-00259055e634
>>    rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
>>    label: (null)
>>    length: 4000682172416
>>    offset: 1048576
>>    type: freebsd-zfs
>>    index: 1
>>    end: 7813834415
>>    start: 2048
>> Consumers:
>> 1. Name: da2
>>    Mediasize: 4000787030016 (3.7T)
>>    Sectorsize: 512
>>    Mode: r1w1e2
>>
>> Each partition gives me 4000682172416 bytes (or 3.64 TiB). I'm using 16 drives. I create the zpool with 4K sectors as follows:
>> $ sudo gnop create -S 4096 /dev/da2p1
>> $ sudo zpool create zav raidz2 da2p1.nop da3p1 da4p1 da5p1 da6p1 da7p1 da8p1 da9p1 da10p1 da11p1 da12p1 da13p1 da14p1 da15p1 da16p1 da17p1
>>
>> I confirm ashift=12:
>> $ sudo zdb zav | grep ashift
>> ashift: 12
>> ashift: 12
>>
>> "zpool list" approximately matches the expected raw capacity of 16*4000682172416 = 64010914758656 bytes (58.28 TiB).
>> $ zpool list zav
>> NAME  SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
>> zav    58T  1.34M  58.0T   0%  1.00x  ONLINE  -
>>
>> For raidz2, I'd expect to see 4000682172416*14 = 56009550413824 bytes (50.94 TiB). However, I only get:
>> $ zfs list zav
>> NAME   USED  AVAIL  REFER  MOUNTPOINT
>> zav   1.10M  46.8T   354K  /zav
>>
>> Or using df for greater accuracy:
>> $ df zav
>> Filesystem     1K-blocks  Used        Avail  Capacity  Mounted on
>> zav          50288393472   354  50288393117        0%  /zav
>>
>> A total of 51495314915328 bytes (46.83TiB).
(This is for a freshly created zpool, before any snapshots etc. have been performed.)
>>
>> I measure overhead as (expected - actual) / expected, which in the case of the 4k sector (ashift=12) raidz2 comes to 8.06%.
>>
>> To create a 512-byte sector (ashift=9) raidz2 pool, I basically just replace "da2p1.nop" with "da2p1" when creating the zpool. I confirm ashift=9. The zpool raw size is the same (as far as I can tell with such limited precision from zpool list). However, the available size according to zfs list/df is 54560512935936 bytes (49.62 TiB), which amounts to an overhead of 2.59%. There are some minor differences in the ALLOC and USED size listings, so I repeat them here for the 512-byte sector raidz2 pool:
>> $ zpool list zav
>> NAME  SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
>> zav    58T   228K  58.0T   0%  1.00x  ONLINE  -
>> $ zfs list zav
>> NAME  USED  AVAIL  REFER  MOUNTPOINT
>> zav   198K  49.6T  73.0K  /zav
>> $ df zav
>> Filesystem     1K-blocks  Used        Avail  Capacity  Mounted on
>> zav          53281750914    73  53281750841        0%  /zav
>>
>> I expect some overhead from ZFS, and according to this blog post:
>> http://www.cuddletech.com/blog/pivot/entry.php?id=1013
>> (via http://mail.opensolaris.org/pipermail/zfs-discuss/2010-May/041773.html)
>> there may be a 1/64 (about 1.56%) overhead baked into ZFS. Interestingly enough, when I create a pool with no raid/mirroring, I get an overhead of 1.93% regardless of ashift=9 or ashift=12, which is quite close to the 1/64 number. I have also tested raidz, which behaves similarly to raidz2, though the overhead is slightly less in each case: 1) ashift=9 raidz overhead is 2.33% and 2) ashift=12 raidz overhead is 7.04%.
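Taylor's overhead figures can be reproduced from the reported byte counts. This is a quick sanity check (not code from the thread), using overhead = (expected - actual) / expected:

```python
# Recompute the reported raidz2 overhead from the thread's numbers.
def overhead(expected: int, actual: int) -> float:
    """Fractional space lost relative to the naive (n-parity)/n expectation."""
    return (expected - actual) / expected

part = 4000682172416              # bytes per partition (from gpart list)
expected = part * 14              # 16-disk raidz2 -> 14 data disks' worth
avail_ashift12 = 51495314915328   # bytes available per df, ashift=12 pool
avail_ashift9 = 54560512935936    # bytes available per df, ashift=9 pool

print(f"ashift=12 overhead: {overhead(expected, avail_ashift12):.2%}")
print(f"ashift=9  overhead: {overhead(expected, avail_ashift9):.2%}")
```

This yields roughly 8.06% for ashift=12 and 2.59% for ashift=9, matching the figures in the thread.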
>> In order to preserve space, I've put the zdb listings for both the ashift=9 and ashift=12 raidz2 pools here:
>> http://pastebin.com/v2xjZkNw
>>
>> There are also some differences in the zdb output; for example, "SPA allocated" is higher in the 4K sector raidz2 pool, which seems interesting, although I don't comprehend the significance of this.
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Sat Mar 24 18:41:21 2012
Date: Sat, 24 Mar 2012 11:41:20 -0700
From: Taylor
To: Alexander Leidinger
Cc: freebsd-fs@freebsd.org
In-Reply-To: <20120324174218.00005f63@unknown>
Subject: Re: ZFS extra space overhead for ashift=12 vs ashift=9 raidz2 pool?
Alex,

Thank you for your response. I'm not particularly concerned about the overhead of file fragmentation, as most of the space will be taken up by fairly large files (tens of GiB).

My original question concerned the amount of space reported available by zfs for a freshly-created *empty* raidz2 filesystem. To re-iterate, I find 2.79TiB more space available with ashift=9 (49.62 TiB) vs ashift=12 (46.83TiB) for a new 16-disk raidz2 pool of 3.64TiB partitions.

(I'd like to keep the 4K sector size, because in my limited performance testing I can write to the 4K sector size (ashift=12) array at ~271MiB/s vs ~228MiB/s for the 512-byte sector size (ashift=9).)

Is this extra filesystem overhead expected for empty ashift=12 raidz2 pools?
Is there any way to reduce this overhead?

Cheers,
-Taylor

On Mar 24, 2012, at 9:42 AM, Alexander Leidinger wrote:

> On Fri, 23 Mar 2012 09:30:50 -0700 Taylor wrote:
>
>> I'm bringing up a new ZFS filesystem and have noticed something
>> strange with respect to the overhead from ZFS. When I create a raidz2
>> pool with 512-byte sectors (ashift=9), I have an overhead of 2.59%,
>> but when I create the zpool using 4k sectors (ashift=12), I have an
>> overhead of 8.06%. This amounts to a difference of 2.79TiB in my
>> particular application, which I'd like to avoid. :)
>>
>> (Assuming I haven't done anything wrong. :) ) Is the extra overhead
>> for 4k sector (ashift=12) raidz2 pools expected? Is there any way to
>> reduce this?
>
> This depends on the data you write.
>
> If your data is always a multiple of 4k, you will probably have less
> overhead (there is probably still some overhead from ZFS metadata).
>
> If your data is always only a multiple of 512 bytes, you would have much
> less overhead on an ashift=9 FS than on an ashift=12 FS.
>
> If the size of your data is random, and always less than 4k, you have
> more overhead than if the size of your data is random and always
> several GB big.
>
> Bye,
> Alexander.
>
> --
> http://www.Leidinger.net  Alexander @ Leidinger.net: PGP ID = B0063FE7
> http://www.FreeBSD.org       netchild @ FreeBSD.org  : PGP ID = 72077137
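The empty-pool discrepancy Taylor asks about is not addressed directly in the thread, but it is at least roughly consistent with the raidz space-accounting rule in the OpenSolaris-derived sources: each raidz allocation gets parity sectors added and is then rounded up to a multiple of (nparity + 1) sectors, and both costs grow with the 2**ashift sector size. The following is a sketch modeled on that rule (an approximation for illustration, not code from this thread):

```python
# Sketch of raidz on-disk size for one block, modeled on the
# vdev_raidz_asize() logic in the OpenSolaris/illumos sources.
def raidz_asize(psize: int, ndisks: int, nparity: int, ashift: int) -> int:
    sector = 1 << ashift
    ndata = ndisks - nparity
    nsectors = -(-psize // sector)                 # data sectors (ceil)
    nsectors += nparity * (-(-nsectors // ndata))  # plus parity sectors
    # Round up to a multiple of (nparity + 1) sectors so freed gaps
    # always remain allocatable.
    mult = nparity + 1
    return (-(-nsectors // mult) * mult) * sector

# A 128 KiB record on the 16-disk raidz2 described above:
for ashift in (9, 12):
    print(f"ashift={ashift}: 128 KiB record occupies "
          f"{raidz_asize(128 * 1024, 16, 2, ashift)} bytes on disk")
```

For a 128 KiB record this gives 150528 bytes at ashift=9 but 159744 bytes at ashift=12, i.e. noticeably more padding per block at 4K sectors, which points in the same direction as the ~8% vs ~2.6% overhead Taylor measured on the empty pools.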