From owner-freebsd-fs@FreeBSD.ORG Mon Sep 28 11:06:54 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 11F64106566C for ; Mon, 28 Sep 2009 11:06:54 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id E94F18FC17 for ; Mon, 28 Sep 2009 11:06:53 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8SB6rfS063991 for ; Mon, 28 Sep 2009 11:06:53 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8SB6rgQ063987 for freebsd-fs@FreeBSD.org; Mon, 28 Sep 2009 11:06:53 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 28 Sep 2009 11:06:53 GMT Message-Id: <200909281106.n8SB6rgQ063987@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Sep 2009 11:06:54 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/138790 fs [zfs] ZFS ceases caching when mem demand is high o kern/138524 fs [msdosfs] disks and usb flashes/cards with Russian lab o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138367 fs [tmpfs] [panic] 'panic: Assertion pages > 0 failed' wh o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/138109 fs [extfs] [patch] Minor cleanups to the sys/gnu/fs/ext2f f kern/137037 fs [zfs] [hang] zfs rollback on root causes FreeBSD to fr o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic o kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135594 fs [zfs] Single dataset unresponsive with Samba o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o bin/135314 fs [zfs] assertion failed for zdb(8) usage o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot f kern/134496 fs [zfs] [panic] ZFS pool export occasionally causes a ke o kern/134491 fs [zfs] Hot spares are rather cold... 
o kern/133980 fs [panic] [ffs] panic: ffs_valloc: dup alloc o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133614 fs [smbfs] [panic] panic: ffs_truncate: read-only filesys o kern/133373 fs [zfs] umass attachment causes ZFS checksum errors, dat o kern/133174 fs [msdosfs] [patch] msdosfs must support utf-encoded int f kern/133150 fs [zfs] Page fault with ZFS on 7.1-RELEASE/amd64 while w o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132597 fs [tmpfs] [panic] tmpfs-related panic while interrupting o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131995 fs [nfs] Failure to mount NFSv4 server o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/131086 fs [ext2fs] [patch] mkfs.ext2 creates rotten partition o kern/130979 fs [smbfs] [panic] boot/kernel/smbfs.ko o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130229 fs [iconv] usermount fails on fs that need iconv o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/129059 fs [zfs] [patch] ZFS bootloader whitelistable via WITHOUT f kern/128829 fs smbd(8) causes periodic panic on 7-RELEASE o kern/128633 fs [zfs] [lor] lock order reversal in zfs f kern/128173 fs [ext2fs] ls gives "Input/output error" on mounted ext3 o kern/127659 fs [tmpfs] tmpfs memory leak o kern/127420 fs [gjournal] [panic] Journal overflow on gmirrored gjour o kern/127213 fs [tmpfs] sendfile on tmpfs data corruption o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS f kern/125536 fs [ext2fs] ext 2 mounts cleanly but fails on commands li f kern/124621 fs [ext3] [patch] Cannot mount ext2fs partition f bin/124424 fs [zfs] zfs(8): zfs list -r shows strange snapshots' siz o kern/123939 fs [msdosfs] corrupts new files o kern/122888 fs [zfs] zfs hang w/ prefetch on, zil off while running t o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o kern/122047 fs [ext2fs] [patch] incorrect handling of UF_IMMUTABLE / o kern/122038 fs [tmpfs] [panic] tmpfs: panic: tmpfs_alloc_vp: type 0xc o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121779 fs [ufs] snapinfo(8) (and related tools?) 
only work for t o bin/121366 fs [zfs] [patch] Automatic disk scrubbing from periodic(8 o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha f kern/120991 fs [panic] [fs] [snapshot] System crashes when manipulati o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F f kern/119735 fs [zfs] geli + ZFS + samba starting on boot panics 7.0-B o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o bin/118249 fs mv(1): moving a directory changes its mtime o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o kern/116913 fs [ffs] [panic] ffs_blkfree: freeing free block p kern/116608 fs [msdosfs] [patch] msdosfs fails to check mount options o kern/116583 fs [ffs] [hang] System freezes for short time when using o kern/116170 fs [panic] Kernel panic when mounting /tmp o kern/115645 fs [snapshots] [panic] lockmgr: thread 0xc4c00d80, not ex o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b f usb/112640 fs [ext2fs] [hang] Kernel freezes when writing a file to o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o kern/109024 fs [msdosfs] mount_msdosfs: msdosfs_iconv: Operation not o kern/109010 fs [msdosfs] can't mv directory within fat32 file system o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106030 fs [ufs] [panic] panic in ufs from geom when a dead disk o kern/105093 fs [ext2fs] [patch] ext2fs on read-only media cannot be m o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [iso9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna f kern/91568 fs [ufs] [panic] writing to UFS/softupdates DVD media in o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs 
[smbfs] [patch] SMBFS with character conversions somet o kern/89991 fs [ufs] softupdates with mount -ur causes fs UNREFS o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o kern/85326 fs [smbfs] [panic] saving a file via samba to an overquot o kern/84589 fs [2TB] 5.4-STABLE unresponsive during background fsck 2 o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o kern/77826 fs [ext2fs] ext2fs usb filesystem will not mount RW o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/51583 fs [nullfs] [patch] allow to work with devices and socket o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 140 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Sep 28 17:26:08 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5A9931065694; Mon, 28 Sep 2009 17:26:08 +0000 (UTC) (envelope-from pjd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 32BDE8FC08; Mon, 28 Sep 2009 17:26:08 +0000 (UTC) Received: from freefall.freebsd.org (pjd@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8SHQ831059347; Mon, 28 Sep 2009 17:26:08 GMT (envelope-from pjd@freefall.freebsd.org) Received: (from pjd@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8SHQ761059343; Mon, 28 Sep 2009 17:26:07 GMT (envelope-from pjd) Date: Mon, 28 Sep 2009 17:26:07 GMT Message-Id: <200909281726.n8SHQ761059343@freefall.freebsd.org> To: uqs@spoerlein.net, pjd@FreeBSD.org, freebsd-fs@FreeBSD.org, pjd@FreeBSD.org From: pjd@FreeBSD.org Cc: Subject: Re: bin/135314: [zfs] assertion failed for zdb(8) usage X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Sep 2009 17:26:08 -0000 Synopsis: [zfs] assertion failed for zdb(8) usage State-Changed-From-To: open->feedback State-Changed-By: pjd State-Changed-When: pon 28 wrz 2009 17:25:25 UTC State-Changed-Why: Can you reproduce it on 8.0-RC1? It works for me. Responsible-Changed-From-To: freebsd-fs->pjd Responsible-Changed-By: pjd Responsible-Changed-When: pon 28 wrz 2009 17:25:25 UTC Responsible-Changed-Why: I'll take this one. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=135314 From owner-freebsd-fs@FreeBSD.ORG Mon Sep 28 17:29:20 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7EBB1106566B for ; Mon, 28 Sep 2009 17:29:20 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-gx0-f214.google.com (mail-gx0-f214.google.com [209.85.217.214]) by mx1.freebsd.org (Postfix) with ESMTP id 3D29C8FC14 for ; Mon, 28 Sep 2009 17:29:20 +0000 (UTC) Received: by gxk6 with SMTP id 6so2607066gxk.13 for ; Mon, 28 Sep 2009 10:29:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:content-type; bh=7vMJFk8K8vQeTJlZNIBcmGYYE1csUVUMHwhXVivf+UA=; b=ot8C2e9Xtj2ZAgBxljzl7X9nJssuCzFg2EDph9eBDFYaYoM57KR//KDayiXSlF+fuT gu9FyO/Od4ctmXdZwU3rLGgiKiqxXUkFslAGujhzOOta6e+eFC2uLBsPiZQJk7wBGWVu 7qP0jfhXd62tXCYtKA7bZRe972NrR8ETnndYk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; b=VypD2tjjF1h099c4ntYlgQQGwxxjgxP39XtSbT+JDutzoYHt13tog5LskiQFn6v/+z arMOOBhc2vpkIbfGpCwkyQlQF1y6Vz06s38C4aMbsgpC0cna0M8cnqZf/xm+ApOTYD9O sPQn1fBknq/kOwyFMCIYHHjloaE3yz9MhadYI= MIME-Version: 1.0 Received: by 10.90.182.20 with SMTP id e20mr3067364agf.106.1254158959354; Mon, 28 Sep 2009 10:29:19 -0700 (PDT) In-Reply-To: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> Date: Mon, 28 Sep 2009 11:29:19 -0600 Message-ID: <2a5e326f0909281029p17334ceeoff4bb3e7adeb5cef@mail.gmail.com> From: Kurt Touet To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Sep 2009 17:29:20 -0000 I've run into a similar experience again with my zfs raidz1 array reporting itself as healthy when it's not. This, again, was after some drive spin_retry_count errors (and a power cycle when unable to shutdown -h). The pattern goes as follows: 1) A hard drive in the zfs array (for whatever reason) repeatedly times out.. in this case, generating spin_retry_count errors in the smart status. 2) The box is semi-frozen because it cannot deal with activity on the zfs array, so it won't gracefully shutdown -h now. 3) The box is power cycled. 4) Everything spins up fine on the box, the array is now accessible. 
5) zpool status - shows the array as online with no degraded status 6) zpool scrub - shows the drives to be desynced and resilvers a couple of them 7) presumably, everything is fine

monolith# zpool status
  pool: storage
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad4     ONLINE       0     0     0
        spares
          ad22      AVAIL

errors: No known data errors

monolith# zpool scrub storage
monolith# zpool status
  pool: storage
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Mon Sep 28 11:17:05 2009
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad14    ONLINE       0     0     0  1.17M resilvered
            ad6     ONLINE       0     0     0  1.50K resilvered
            ad12    ONLINE       0     0     0  2K resilvered
            ad4     ONLINE       0     0     0  2K resilvered
        spares
          ad22      AVAIL

errors: No known data errors

So, my question still stands: how does zfs, upon scrubbing, instantly know that the drives need to be resilvered (it completes in a few seconds), when it previously declared the array to be fine with no known data errors? Cheers, -kurt
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 28 17:52:33 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CEFDF1065672; Mon, 28 Sep 2009 17:52:33 +0000 (UTC) (envelope-from pjd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A5D0C8FC0C; Mon, 28 Sep 2009 17:52:33 +0000 (UTC) Received: from freefall.freebsd.org (pjd@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8SHqX1O090333; Mon, 28 Sep 2009 17:52:33 GMT (envelope-from pjd@freefall.freebsd.org) Received: (from pjd@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8SHqX7i090329; Mon, 28 Sep 2009 17:52:33 GMT (envelope-from pjd) Date: Mon, 28 Sep 2009 17:52:33 GMT Message-Id: <200909281752.n8SHqX7i090329@freefall.freebsd.org> To: serenity@exscape.org, pjd@FreeBSD.org, freebsd-fs@FreeBSD.org, pjd@FreeBSD.org From: pjd@FreeBSD.org Cc: Subject: Re: kern/134496: [zfs] [panic] ZFS pool export occasionally causes a kernel panic ("vrele: negative ref cnt") X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Sep 2009 17:52:33 -0000 Synopsis: [zfs] [panic] ZFS pool export occasionally causes a kernel panic ("vrele: negative ref cnt") State-Changed-From-To: feedback->closed State-Changed-By: pjd State-Changed-When: pon 28 wrz 2009 17:49:56 UTC State-Changed-Why: I don't believe this problem still exists and I cannot reproduce it with the proposed procedure. Let me know if the problem still exists in 8.0. Responsible-Changed-From-To: freebsd-fs->pjd Responsible-Changed-By: pjd Responsible-Changed-When: pon 28 wrz 2009 17:49:56 UTC Responsible-Changed-Why: I'll take this one. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=134496 From owner-freebsd-fs@FreeBSD.ORG Mon Sep 28 17:59:57 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9292C1065670; Mon, 28 Sep 2009 17:59:57 +0000 (UTC) (envelope-from pjd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 6AB168FC13; Mon, 28 Sep 2009 17:59:57 +0000 (UTC) Received: from freefall.freebsd.org (pjd@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8SHxvlv090512; Mon, 28 Sep 2009 17:59:57 GMT (envelope-from pjd@freefall.freebsd.org) Received: (from pjd@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8SHxvBW090508; Mon, 28 Sep 2009 17:59:57 GMT (envelope-from pjd) Date: Mon, 28 Sep 2009 17:59:57 GMT Message-Id: <200909281759.n8SHxvBW090508@freefall.freebsd.org> To: snnn119@gmail.com, pjd@FreeBSD.org, freebsd-fs@FreeBSD.org, pjd@FreeBSD.org From: pjd@FreeBSD.org Cc: Subject: Re: kern/128633: [zfs] [lor] lock order reversal in zfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Sep 2009 17:59:57 -0000 Synopsis: [zfs] [lor] lock order reversal in zfs State-Changed-From-To: open->feedback State-Changed-By: pjd State-Changed-When: pon 28 wrz 2009 17:58:06 UTC State-Changed-Why: Can you show how how lines around line 1123 in sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c looks like on the source your kernel was compiled from? Same for sys/kern/vfs_subr.c:372. Responsible-Changed-From-To: freebsd-fs->pjd Responsible-Changed-By: pjd Responsible-Changed-When: pon 28 wrz 2009 17:58:06 UTC Responsible-Changed-Why: I'll take this one. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=128633 From owner-freebsd-fs@FreeBSD.ORG Mon Sep 28 18:41:44 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9BCE01065676 for ; Mon, 28 Sep 2009 18:41:44 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-vw0-f180.google.com (mail-vw0-f180.google.com [209.85.212.180]) by mx1.freebsd.org (Postfix) with ESMTP id 562A88FC1F for ; Mon, 28 Sep 2009 18:41:44 +0000 (UTC) Received: by vws10 with SMTP id 10so3713903vws.7 for ; Mon, 28 Sep 2009 11:41:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:date:message-id:subject :from:to:content-type:content-transfer-encoding; bh=WZGYXtPFAcbT6/t4kK8kvHTvRx6cJt9D5tkwcPP4pmY=; b=inKFxftmUaa2bQBLanLV22QS4FQ/n0nrFkYQGS1THi6jiQtYo9zboR0Fg2Jj0jP+m2 uVTLAETQr2F+dTsUbxa6RmjqX0Pe8861Ew8c1wmkKL0rRGYDS8CqKu+bMCi2iIWr7uzV l48frGVwPm2plsezAxTf1YUe0jqAm4K+nkZ5M= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type :content-transfer-encoding; b=kNYBPMfXXSO5u9JQtD2uz9fh/+ykTybdgfOlLL4QZ7tGDHpdD0fwHqbB39RtU3Sb8Z 0ciG4VgxncdXXscfHvCVwBYaNwsnMeY57g1vLXQqv5xhL2rJSeZKmBj92AYYis9FwIGV iXwIpbchNQ9a7rq/JISluNDMGHfdNTWijyKFw= MIME-Version: 1.0 Received: by 10.220.88.25 with SMTP id y25mr6198058vcl.66.1254162010838; Mon, 28 Sep 2009 11:20:10 -0700 (PDT) Date: Mon, 28 Sep 2009 14:20:10 -0400 Message-ID: <5da0588e0909281120p301bf75fi1bfda50c1a0a7ef0@mail.gmail.com> From: Rich To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Subject: zpool replace 'stuck' X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Sep 2009 18:41:44 -0000 Hello, world. On 8.0-RC1 amd64, I seem to have run into the same wedge condition noted in http://markmail.org/message/khini6ifoty2ecfd. Unfortunately, however, I can't use the workaround of removing the drive, aborting the scrub, and detaching one of the replacements, as it reports thus: [root@manticore ~]# zpool detach bukkit 7303939385138290847 cannot detach 7303939385138290847: no valid replicas [root@manticore ~]# zpool detach bukkit da14 cannot detach da14: no valid replicas da14 is offline (physically not attached), and has been for several reboots= . Thoughts? - Rich --=20 Os amigos s=E3o a forma de Deus cuidar de n=F3s. 
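For reference, the workaround referenced above (pull the drive, abort the scrub, detach one side of the stuck replacement) would normally be a sequence along these lines -- a sketch only, using the pool and device names from this message, and ending in exactly the detach step that fails here with "no valid replicas":

  zpool scrub -s bukkit                      # stop the in-progress scrub/resilver
  zpool status -v bukkit                     # note the numeric guid of the stale replacing vdev
  zpool detach bukkit 7303939385138290847    # detach one side of the replacing vdev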
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 29 07:18:32 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B1DC11065672; Tue, 29 Sep 2009 07:18:32 +0000 (UTC) (envelope-from pjd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 87F408FC16; Tue, 29 Sep 2009 07:18:32 +0000 (UTC) Received: from freefall.freebsd.org (pjd@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8T7IWPH019860; Tue, 29 Sep 2009 07:18:32 GMT (envelope-from pjd@freefall.freebsd.org) Received: (from pjd@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8T7IWxO019856; Tue, 29 Sep 2009 07:18:32 GMT (envelope-from pjd) Date: Tue, 29 Sep 2009 07:18:32 GMT Message-Id: <200909290718.n8T7IWxO019856@freefall.freebsd.org> To: jbsnyder@gmail.com, pjd@FreeBSD.org, freebsd-fs@FreeBSD.org, pjd@FreeBSD.org From: pjd@FreeBSD.org Cc: Subject: Re: kern/122888: [zfs] zfs hang w/ prefetch on, zil off while running transmission-daemon X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 29 Sep 2009 07:18:32 -0000 Synopsis: [zfs] zfs hang w/ prefetch on, zil off while running transmission-daemon State-Changed-From-To: open->feedback State-Changed-By: pjd State-Changed-When: pon 28 wrz 2009 18:02:24 UTC State-Changed-Why: How much RAM do you systems have? PS. zfs:lo comes from zfs:lowmem, it means that process is stuck in ZFS trying to free memory, but if it is stuck for good, it means that ZFS cannot reclaim any memory. Responsible-Changed-From-To: freebsd-fs->pjd Responsible-Changed-By: pjd Responsible-Changed-When: pon 28 wrz 2009 18:02:24 UTC Responsible-Changed-Why: I'll take this one. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=122888 From owner-freebsd-fs@FreeBSD.ORG Tue Sep 29 11:04:39 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 66B991065696 for ; Tue, 29 Sep 2009 11:04:39 +0000 (UTC) (envelope-from bra@fsn.hu) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) by mx1.freebsd.org (Postfix) with ESMTP id D5A228FC1D for ; Tue, 29 Sep 2009 11:04:38 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id 8E2E913004E; Tue, 29 Sep 2009 12:45:25 +0200 (CEST) X-CRM114-Version: 20090423-BlameSteveJobs ( TRE 0.7.6 (BSD) ) MF-ACE0E1EA [pR: 15.2562] X-CRM114-CacheID: sfid-20090929_12452_1600AAAB X-CRM114-Status: Good ( pR: 15.2562 ) Message-ID: <4AC1E540.9070001@fsn.hu> Date: Tue, 29 Sep 2009 12:45:20 +0200 From: Attila Nagy User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.23) Gecko/20090817 Thunderbird/2.0.0.23 Mnenhy/0.7.6.0 MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org X-Stationery: 0.4.10 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.3 (people.fsn.hu); Tue, 29 Sep 2009 12:45:24 +0200 (CEST) Cc: Subject: ARC size constantly shrinks, then ZFS slows down extremely X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 29 Sep 2009 11:04:39 -0000 Hello, I'm using FreeBSD 8 (previously 7) on a machine with a lot of disks and 32 GB RAM. With 7.x it ran very well for about 50 days, but suddenly every operation slowed down: gstat showed that the disks were working a lot more than usual, and the zpool/zfs was pretty unusable. I then rebooted the machine with FreeBSD 8 in the hope that the new ZFS fixes will correct this issue (50 days haven't passed since then, so I don't know yet) and started to monitor ZFS's statistics. It seems that after a reboot, the ARC size starts to grow, then something flips the switch and it changes to shrinking, instead of maintaining the size. Please see the pictures here: http://people.fsn.hu/~bra/freebsd/20090929-zfs-arcsize/ Before the 27th the machine ran FreeBSD 7; after that date it runs 8. As you can see, no user process took the memory, so I don't know why the ARC size grows first and then starts to decrease. Could it be that the ARC size decreases by such a big amount that it effectively disappears, and this causes the IO activity to go up and kill the machine? 
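In case the raw numbers are useful, the counters behind those graphs can be sampled with a trivial loop -- a minimal sketch, using only the stock sysctl names (the graphing itself is not shown):

  #!/bin/sh
  # sample ARC size, ARC target and the configured maximum once a minute
  while true; do
      size=$(sysctl -n kstat.zfs.misc.arcstats.size)
      target=$(sysctl -n kstat.zfs.misc.arcstats.c)
      max=$(sysctl -n vfs.zfs.arc_max)
      echo "$(date +%s) size=$size target=$target arc_max=$max"
      sleep 60
  done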
Thanks, From owner-freebsd-fs@FreeBSD.ORG Tue Sep 29 15:41:16 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0564B1065692 for ; Tue, 29 Sep 2009 15:41:16 +0000 (UTC) (envelope-from snnn119@gmail.com) Received: from mail-pz0-f194.google.com (mail-pz0-f194.google.com [209.85.222.194]) by mx1.freebsd.org (Postfix) with ESMTP id C8D3E8FC13 for ; Tue, 29 Sep 2009 15:41:15 +0000 (UTC) Received: by pzk32 with SMTP id 32so3442995pzk.3 for ; Tue, 29 Sep 2009 08:41:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from :user-agent:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; bh=Oa5mefsbtubzU6aJpcTs8uTGcgU8SLNbZWjoaqik4aI=; b=d6th0OxiHA5oO9kg7UaTwP2k028q0eu3xpgLu8vT+s57h2ZXhySQAYH0HXH5mf5438 3e3U4+ZKPTsZ3BVwP6cRLe8JdXlKL1pqZFcaS4LnBiiZK70D24yjH0sVWk1d7tA0aRRS Ccx/WQyy/dIn5lVpOAZTXw/vRizjFJdyqNUiY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; b=GJ/D4gzcsVF9HaKiCjCCnD0lyqY6RHba7OgXb33Fqt8/t8oC1bLww4T35eTdLC7PKB fj5nKRWDATmpopMGV6D1sQF1y7bzTj1taNS+uqlPr6Mkb4nvmevu+AVrqsEm2W4FRxdE I+N5xLel4BYrVfbOhjqPSHwjhgVKyN75UfT8Y= Received: by 10.115.67.10 with SMTP id u10mr8241502wak.203.1254237369794; Tue, 29 Sep 2009 08:16:09 -0700 (PDT) Received: from ?192.168.0.100? ([117.79.69.149]) by mx.google.com with ESMTPS id 22sm2880366pzk.10.2009.09.29.08.16.07 (version=TLSv1/SSLv3 cipher=RC4-MD5); Tue, 29 Sep 2009 08:16:08 -0700 (PDT) Message-ID: <4AC224B7.3000201@gmail.com> Date: Tue, 29 Sep 2009 23:16:07 +0800 From: snnn User-Agent: Thunderbird 2.0.0.23 (Windows/20090812) MIME-Version: 1.0 To: pjd@FreeBSD.org References: <200909281759.n8SHxvBW090508@freefall.freebsd.org> In-Reply-To: <200909281759.n8SHxvBW090508@freefall.freebsd.org> Content-Type: text/plain; charset=gb18030; format=flowed Content-Transfer-Encoding: 8bit Cc: freebsd-fs@FreeBSD.org Subject: Re: kern/128633: [zfs] [lor] lock order reversal in zfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 29 Sep 2009 15:41:16 -0000 pjd@FreeBSD.org дµÀ: > Synopsis: [zfs] [lor] lock order reversal in zfs > > State-Changed-From-To: open->feedback > State-Changed-By: pjd > State-Changed-When: pon 28 wrz 2009 17:58:06 UTC > State-Changed-Why: > Can you show how how lines around line 1123 in > sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c looks like > on the source your kernel was compiled from? Same for sys/kern/vfs_subr.c:372. > > I'm sorry.It is too far from this pr was sent. 
The kernel which I were used is 8.0-CURRENT-200810,it isn't compiled by myself but downloaded at ftp.freebsd.org From owner-freebsd-fs@FreeBSD.ORG Wed Sep 30 02:01:34 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CA767106566B for ; Wed, 30 Sep 2009 02:01:34 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-pz0-f202.google.com (mail-pz0-f202.google.com [209.85.222.202]) by mx1.freebsd.org (Postfix) with ESMTP id A09618FC1E for ; Wed, 30 Sep 2009 02:01:34 +0000 (UTC) Received: by pzk40 with SMTP id 40so3911731pzk.7 for ; Tue, 29 Sep 2009 19:01:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:content-type :content-transfer-encoding; bh=NSX21C9tlaqm4qhQUBKcfFws/vEiZV4YcJUTOux7Yqc=; b=n9laa0+/sEaquL5ScDYAk4Pyxigyx5pqwzOZ4gn4Ai65d8W3+euS/y6LYD6wdLJ/0d 6MC5JVAH8dOuBMyHYCETqH8Jzb0koY3flV0v+9C3Y/TfQC06Mg0YzUiMNUvGeu0J4NJJ HBBmMqAIi9y0VY+tMJhKeahoPL3NvgnEHA/qQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=gCx5zjkPZU/WalNcO97k/V9nCFa3yUDPWu9CtNvgnlj48TXHAOgtrUrtaNgxnMn1+W gwBfz1U1QguqFKsnA2+bJNHmkqugUPrSLHFmOiekZZuiW3ddyWueoB1cYa4zut3U8eB8 3WdIKoe1YltNXSYxs7Ce9lPVU9oglMLA4NNMk= MIME-Version: 1.0 Received: by 10.115.117.34 with SMTP id u34mr9440929wam.193.1254276093919; Tue, 29 Sep 2009 19:01:33 -0700 (PDT) In-Reply-To: <5da0588e0909281120p301bf75fi1bfda50c1a0a7ef0@mail.gmail.com> References: <5da0588e0909281120p301bf75fi1bfda50c1a0a7ef0@mail.gmail.com> Date: Tue, 29 Sep 2009 22:01:33 -0400 Message-ID: <5da0588e0909291901o328f9b13x3eb61a93b98ddc37@mail.gmail.com> From: Rich To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Subject: Re: zpool replace 'stuck' X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 30 Sep 2009 02:01:34 -0000 Oh, what a horrid fix! We modified src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:~3577 = from: VERIFY(spa_scrub(spa, POOL_SCRUB_RESILVER) =3D=3D 0); to: spa_scrub(spa, POOL_SCRUB_RESILVER); and ~3469: if (type =3D=3D POOL_SCRUB_EVERYTHING && to: if ((type =3D=3D POOL_SCRUB_EVERYTHING || type =3D=3D POOL_SCRUB_RE= SILVER) && This allowed the scrub to finish sanely. Why is it sane to allow it to abort and start a new resilver when in mid-resilver? - Rich On Mon, Sep 28, 2009 at 2:20 PM, Rich wrote: > Hello, world. > > On 8.0-RC1 amd64, I seem to have run into the same wedge condition > noted in http://markmail.org/message/khini6ifoty2ecfd. > > Unfortunately, however, I can't use the workaround of removing the > drive, aborting the scrub, and detaching one of the replacements, as > it reports thus: > [root@manticore ~]# zpool detach bukkit 7303939385138290847 > cannot detach 7303939385138290847: no valid replicas > [root@manticore ~]# zpool detach bukkit da14 > cannot detach da14: no valid replicas > > da14 is offline (physically not attached), and has been for several reboo= ts. > > Thoughts? > > - Rich > > -- > > Os amigos s=E3o a forma de Deus cuidar de n=F3s. > --=20 As senadoras? Sim! Missa, roda nessa! 
-- pal=EDndromo From owner-freebsd-fs@FreeBSD.ORG Wed Sep 30 16:12:50 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 91AD7106568D; Wed, 30 Sep 2009 16:12:50 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 69AC08FC19; Wed, 30 Sep 2009 16:12:50 +0000 (UTC) Received: from freefall.freebsd.org (linimon@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8UGCo52035814; Wed, 30 Sep 2009 16:12:50 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8UGCocI035810; Wed, 30 Sep 2009 16:12:50 GMT (envelope-from linimon) Date: Wed, 30 Sep 2009 16:12:50 GMT Message-Id: <200909301612.n8UGCocI035810@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/139198: [nfs] Page Fault out of NLM X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 30 Sep 2009 16:12:50 -0000 Old Synopsis: Page Fault out of NLM New Synopsis: [nfs] Page Fault out of NLM Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Sep 30 16:12:16 UTC 2009 Responsible-Changed-Why: reclassify. http://www.freebsd.org/cgi/query-pr.cgi?pr=139198 From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 09:21:05 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 97AB71065672; Thu, 1 Oct 2009 09:21:05 +0000 (UTC) (envelope-from olivier@gid0.org) Received: from mail-ew0-f209.google.com (mail-ew0-f209.google.com [209.85.219.209]) by mx1.freebsd.org (Postfix) with ESMTP id 105EE8FC17; Thu, 1 Oct 2009 09:21:04 +0000 (UTC) Received: by ewy5 with SMTP id 5so1493570ewy.36 for ; Thu, 01 Oct 2009 02:21:04 -0700 (PDT) MIME-Version: 1.0 Received: by 10.216.89.14 with SMTP id b14mr186919wef.76.1254388863797; Thu, 01 Oct 2009 02:21:03 -0700 (PDT) In-Reply-To: <200909230920.n8N9KIJ6005528@freefall.freebsd.org> References: <200909230920.n8N9KIJ6005528@freefall.freebsd.org> Date: Thu, 1 Oct 2009 11:21:03 +0200 Message-ID: <367b2c980910010221kd388f43q8243797b4eac9af7@mail.gmail.com> From: Olivier Smedts To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-2 Content-Transfer-Encoding: quoted-printable Cc: pjd@freebsd.org Subject: Re: kern/139072: [zfs] zfs marked as production ready but it used a deprecated checksum algorithm X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Oct 2009 09:21:05 -0000 Hello, 2009/9/23 : > Synopsis: [zfs] zfs marked as production ready but it used a deprecated c= hecksum algorithm > > Responsible-Changed-From-To: freebsd-fs->pjd > Responsible-Changed-By: pjd > Responsible-Changed-When: =B6ro 23 wrz 2009 09:19:57 UTC > Responsible-Changed-Why: > I'll take this one. 
> > http://www.freebsd.org/cgi/query-pr.cgi?pr=3D139072 Now that this PR is closed, is there something to change on *existing* zfs filesystems to make them use fletcher4 (for new data) when they have the default property "checksum=3Don" ? Is there something to do (other than dumping and restoring) to change checksums to fletcher4 for existing data and metadata ? Thanks, Olivier --=20 Olivier Smedts _ ASCII ribbon campaign ( ) e-mail: olivier@gid0.org - against HTML email & vCards X www: http://www.gid0.org - against proprietary attachments / \ "Il y a seulement 10 sortes de gens dans le monde : ceux qui comprennent le binaire, et ceux qui ne le comprennent pas." From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 09:43:31 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 15DA9106568B for ; Thu, 1 Oct 2009 09:43:31 +0000 (UTC) (envelope-from solon@pyro.de) Received: from srv23.fsb.echelon.bnd.org (mail.pyro.de [83.137.99.96]) by mx1.freebsd.org (Postfix) with ESMTP id BE3BD8FC08 for ; Thu, 1 Oct 2009 09:43:30 +0000 (UTC) Received: from port-87-193-183-44.static.qsc.de ([87.193.183.44] helo=flash.home) by srv23.fsb.echelon.bnd.org with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.69 (FreeBSD)) (envelope-from ) id 1MtHaz-0000LW-OO for freebsd-fs@freebsd.org; Thu, 01 Oct 2009 11:05:09 +0200 Date: Thu, 1 Oct 2009 11:05:03 +0200 From: Solon Lutz X-Mailer: The Bat! (v3.99.25) Professional Organization: pyro.labs berlin X-Priority: 3 (Normal) Message-ID: <683849754.20091001110503@pyro.de> To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Spam-Score: -0.1 (/) X-Spam-Report: Spam detection software, running on the system "srv23.fsb.echelon.bnd.org", has identified this incoming email as possible spam. The original message has been attached to this so you can view it (if it isn't spam) or label similar future email. If you have any questions, see The administrator of that system for details. Content preview: Hi erverybody, I'm faced with a 10TB ZFS pool on a 12TB RAID6 Areca controller. And yes, I know, you shouldn't put a zpool on a RAID-device... =( Due to problems with a sata-cable, some days ago the raid-controller started to produce long timeouts to recover the resulting read errors. [...] Content analysis details: (-0.1 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -1.4 ALL_TRUSTED Passed through trusted hosts only via SMTP 1.3 PLING_QUERY Subject has exclamation mark and question mark X-Spam-Flag: NO Subject: Help needed! ZFS I/O error recovery? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Oct 2009 09:43:31 -0000 Hi erverybody, I'm faced with a 10TB ZFS pool on a 12TB RAID6 Areca controller. And yes, I know, you shouldn't put a zpool on a RAID-device... =( Due to problems with a sata-cable, some days ago the raid-controller started to produce long timeouts to recover the resulting read errors. The cable was replaced, a parity check was run on the RAID-Volume and showed no errors, the zfs scrub however showed some 'defective' files. After copying these files with 'dd -conv=noerror...' and comparing them to the originals, they were error-free. 
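(Spelled out, those per-file copies were done with something along the lines of the following -- a sketch with placeholder paths:

  dd if=/pool/path/to/file of=/backup/path/to/file bs=128k conv=noerror,sync

conv=noerror keeps dd reading past read errors instead of aborting, and sync pads the failed blocks with zeroes so the offsets stay aligned for the later comparison against the originals.)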
Yesterday however, three more defective cables forced the controller to take the RAID6 volume offline. Now all cables were replaced and a parity check was run on the RAID-Volume -> data integrity OK. But now ZFS refuses to mount all volumes: Solaris: WARNING: can't process intent log for temp/space1 Solaris: WARNING: can't process intent log for temp/space2 Solaris: WARNING: can't process intent log for temp/space3 Solaris: WARNING: can't process intent log for temp/space4 A scrub revealed to following: errors: Permanent errors have been detected in the following files: temp:<0x0> temp/space1:<0x0> temp/space2:<0x0> temp/space3:<0x0> temp/space4:<0x0> I tried to switch off checksums for this pool, but that didn't help in any way. I also mounted the pool by hand and was faced with with 'empty' volumes and 'I/O errors' when trying to list their contents... Any suggestions? I'm offering some self-made blackberry jam and raspberry brandy to the person who can help to restore or backup the data. Tech specs: FreeBSD 7.2-STABLE #21: Tue May 5 18:44:10 CEST 2009 (AMD64) da0 at arcmsr0 bus 0 target 0 lun 0 da0: Fixed Direct Access SCSI-5 device da0: 166.666MB/s transfers (83.333MHz DT, offset 32, 16bit) da0: Command Queueing Enabled da0: 10490414MB (21484367872 512 byte sectors: 255H 63S/T 1337340C) ZFS filesystem version 6 ZFS storage pool version 6 Best regards, Solon From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 10:31:15 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A45A01065670 for ; Thu, 1 Oct 2009 10:31:15 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (chello087206049004.chello.pl [87.206.49.4]) by mx1.freebsd.org (Postfix) with ESMTP id E62D68FC18 for ; Thu, 1 Oct 2009 10:31:14 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 1981445E49; Thu, 1 Oct 2009 12:31:12 +0200 (CEST) Received: from localhost (pdawidek.wheel.pl [10.0.1.1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 34B7F45B36; Thu, 1 Oct 2009 12:31:07 +0200 (CEST) Date: Thu, 1 Oct 2009 12:31:10 +0200 From: Pawel Jakub Dawidek To: Olivier Smedts Message-ID: <20091001103110.GB1595@garage.freebsd.pl> References: <200909230920.n8N9KIJ6005528@freefall.freebsd.org> <367b2c980910010221kd388f43q8243797b4eac9af7@mail.gmail.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="MW5yreqqjyrRcusr" Content-Disposition: inline In-Reply-To: <367b2c980910010221kd388f43q8243797b4eac9af7@mail.gmail.com> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT i386 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-5.9 required=4.5 tests=ALL_TRUSTED,BAYES_00 autolearn=ham version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: kern/139072: [zfs] zfs marked as production ready but it used a deprecated checksum algorithm X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Oct 2009 10:31:15 -0000 --MW5yreqqjyrRcusr Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Oct 
01, 2009 at 11:21:03AM +0200, Olivier Smedts wrote: > Hello, >=20 > 2009/9/23 : > > Synopsis: [zfs] zfs marked as production ready but it used a deprecated= checksum algorithm > > > > Responsible-Changed-From-To: freebsd-fs->pjd > > Responsible-Changed-By: pjd > > Responsible-Changed-When: =C5=9Bro 23 wrz 2009 09:19:57 UTC > > Responsible-Changed-Why: > > I'll take this one. > > > > http://www.freebsd.org/cgi/query-pr.cgi?pr=3D139072 >=20 > Now that this PR is closed, is there something to change on *existing* > zfs filesystems to make them use fletcher4 (for new data) when they > have the default property "checksum=3Don" ? Is there something to do > (other than dumping and restoring) to change checksums to fletcher4 > for existing data and metadata ? You have to manually change checksum to fletcher4 for existing datasets, I think and backup/restore your data. --=20 Pawel Jakub Dawidek http://www.wheel.pl pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! --MW5yreqqjyrRcusr Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.4 (FreeBSD) iD8DBQFKxITuForvXbEpPzQRAv4WAJ44VwTf+lkAZ9CaD+NOOSLDC8er7QCbBDhV sUmsZoA1VMKQUTbbMLjnQ1Q= =5JNn -----END PGP SIGNATURE----- --MW5yreqqjyrRcusr-- From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 13:51:27 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3A29B1065672; Thu, 1 Oct 2009 13:51:27 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org) Received: from mail.jrv.org (rrcs-24-73-246-106.sw.biz.rr.com [24.73.246.106]) by mx1.freebsd.org (Postfix) with ESMTP id C17B28FC1F; Thu, 1 Oct 2009 13:51:26 +0000 (UTC) Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.14.3) with ESMTP id n91DpPJo095900; Thu, 1 Oct 2009 08:51:25 -0500 (CDT) (envelope-from james-freebsd-fs2@jrv.org) Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:to:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=eGQ1ykIaIn3Yp0IqYhu4eTwprU8tgZMUN+YT2a/CrrGHk+USPZxMvyVTxHmyzL+RE u2O80xDezC63e8UNPSYJZCd/HJ/UipBy9tYz0ebYMOHji8rhbFqGqRFUcUgwsCbMc/k iJIB58T5zZyhpX9Vfey3I/5VvNzfc04Lz0lHW+M= Message-ID: <4AC4B3DD.5050600@jrv.org> Date: Thu, 01 Oct 2009 08:51:25 -0500 From: "James R. Van Artsdalen" User-Agent: Thunderbird 2.0.0.23 (Macintosh/20090812) MIME-Version: 1.0 To: Olivier Smedts References: <200909230920.n8N9KIJ6005528@freefall.freebsd.org> <367b2c980910010221kd388f43q8243797b4eac9af7@mail.gmail.com> In-Reply-To: <367b2c980910010221kd388f43q8243797b4eac9af7@mail.gmail.com> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs , pjd@freebsd.org Subject: Re: kern/139072: [zfs] zfs marked as production ready but it used a deprecated checksum algorithm X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Oct 2009 13:51:27 -0000 Olivier Smedts wrote: > Hello, > > Now that this PR is closed, is there something to change on *existing* > zfs filesystems to make them use fletcher4 (for new data) when they > have the default property "checksum=on"? 
# zfs set checksum=fletcher4 pool > Is there something to do > (other than dumping and restoring) to change checksums to fletcher4 > for existing data and metadata ? No. Even "fletcher4" has the undesirable property that the checksum of every group of zeros, of any length, is the same as the initial value of the accumulator. This means that fletcher4 is insensitive to the number of leading zeros in the checksummed data. The ZFS team needs to revisit the checksum issue and add another algorithm but they have other things to worry about at the moment. Some SHA-3 contestants claim to be very fast though it's not clear they're fast enough to replace a true Fletcher sum in the real world, at least not yet. From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 16:55:11 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 20040106566B for ; Thu, 1 Oct 2009 16:55:11 +0000 (UTC) (envelope-from mj@feral.com) Received: from ns1.feral.com (ns1.feral.com [192.67.166.1]) by mx1.freebsd.org (Postfix) with ESMTP id D65608FC16 for ; Thu, 1 Oct 2009 16:55:10 +0000 (UTC) Received: from [10.8.0.2] (remotevpn [10.8.0.2]) by ns1.feral.com (8.14.3/8.14.3) with ESMTP id n91GOGfV025773 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO) for ; Thu, 1 Oct 2009 09:24:19 -0700 (PDT) (envelope-from mj@feral.com) Message-ID: <4AC4D7AA.1080005@feral.com> Date: Thu, 01 Oct 2009 09:24:10 -0700 From: Matthew Jacob Organization: Feral Software User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.23) Gecko/20090825 SeaMonkey/1.1.18 MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Default is to whitelist mail, not delayed by milter-greylist-4.2.3 (ns1.feral.com [10.8.0.1]); Thu, 01 Oct 2009 09:24:19 -0700 (PDT) Cc: Subject: review needed for a simple fix to growfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: mjacob@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Oct 2009 16:55:11 -0000 I have a simple fix to growfs that eliminates some issues Doug was seeing. I probably don't know enough to know some of the implications of the change and wonder if anyone would care to comment on it? The symptoms would be fsck failures on the grown filesystem- my take is that it was because the new cylinder groups were being initialized as having all the inodes allocated. This is puzzling me because, like, how could this ever have worked? It was trivial for me to reproduce- see http://people.freebsd.org/~mjacob/growfs.failure.txt. The second change, btw, is not essential- it just adjusts maxino down if you had to drop the number of cylinder groups down. 
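A minimal way to poke at this locally (a sketch only, not the transcript above -- a file-backed md device with arbitrary sizes; the real reproduction is at the URL mentioned earlier) is something like:

  truncate -s 512m /tmp/fs.img
  mdconfig -a -t vnode -f /tmp/fs.img -u 0
  newfs /dev/md0
  mdconfig -d -u 0
  truncate -s 1g /tmp/fs.img                # grow the backing store
  mdconfig -a -t vnode -f /tmp/fs.img -u 0
  growfs /dev/md0                           # answer the confirmation prompt
  fsck_ffs -f /dev/md0                      # complains about the new cylinder groups without the fix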
Index: growfs.c =================================================================== --- growfs.c (revision 197658) +++ growfs.c (working copy) @@ -401,7 +401,6 @@ acg.cg_magic = CG_MAGIC; acg.cg_cgx = cylno; acg.cg_niblk = sblock.fs_ipg; - acg.cg_initediblk = sblock.fs_ipg; acg.cg_ndblk = dmax - cbase; if (sblock.fs_contigsumsize > 0) acg.cg_nclusterblks = acg.cg_ndblk / sblock.fs_frag; @@ -2217,6 +2216,7 @@ printf("Warning: %jd sector(s) cannot be allocated.\n", (intmax_t)fsbtodb(&sblock, sblock.fs_size % sblock.fs_fpg)); sblock.fs_size = sblock.fs_ncg * sblock.fs_fpg; + maxino -= sblock.fs_ipg; } /* From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 20:48:29 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 454D31065676 for ; Thu, 1 Oct 2009 20:48:29 +0000 (UTC) (envelope-from pepelac@gmail.com) Received: from mail-ew0-f209.google.com (mail-ew0-f209.google.com [209.85.219.209]) by mx1.freebsd.org (Postfix) with ESMTP id C417F8FC14 for ; Thu, 1 Oct 2009 20:48:28 +0000 (UTC) Received: by ewy5 with SMTP id 5so616445ewy.36 for ; Thu, 01 Oct 2009 13:48:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:date:message-id:subject :from:to:content-type; bh=YtIyF5Q+eFVDZhzsOzJ0fl2i+Y7kB8P0/fFfk3yZVmw=; b=xxF25U5Vx17MIO2IlU7q/Q3ccvmAr+rQpkwXV7Qzlk+B/XVhxaa5ZVydq827sXjl9m OAchD3csohUjihKiR9uzGzgD8MA+KJ9VOhWT4pZ56GplLylfvkUFJQZbL/M1eIWsN6uJ Gf99s6GFOa8lBbZLd9ludUE9p24zOeu9+LnG8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=mCWuyZS8nT1b/hUDbuc0rNkox65P5dp/Vx42mqai4dblEIAWUH0zd0Pnk9WDDdTeil Ysrnl1MIp6s62sT/tWHxhx8STp13RMbWhuA989bUTgxbJm0i9rS0ZIFyOF2/sjJbHZ4i wmHah3Po/EMbZ6uBhqQCP3ze/Za0p2/wq6LR0= MIME-Version: 1.0 Received: by 10.211.143.9 with SMTP id v9mr1935096ebn.53.1254428547896; Thu, 01 Oct 2009 13:22:27 -0700 (PDT) Date: Fri, 2 Oct 2009 00:22:27 +0400 Message-ID: <8c9ae7950910011322j1a6b66fcp73615cc17ae20328@mail.gmail.com> From: Alexander Shevchenko To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: ARC & L2ARC efficiency X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Oct 2009 20:48:29 -0000 Good time of day! How could i check the efficiency of ARC? Are total reads from pool equal kstat.zfs.misc.arcstats.hits + kstat.zfs.misc.arcstats.misses, or this values are just reads from cache? By efficiency i mean reads_from_cache/(reads_from_cache+reads_from_drives) Are there any document where kstat values described? 
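In other words, the ratio i have in mind is just the following -- a sketch, assuming hits + misses really is the total number of ARC lookups:

  hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
  misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
  l2_hits=$(sysctl -n kstat.zfs.misc.arcstats.l2_hits)
  l2_misses=$(sysctl -n kstat.zfs.misc.arcstats.l2_misses)
  echo "ARC hit%:   $(echo "scale=2; 100*$hits/($hits+$misses)" | bc)"
  echo "L2ARC hit%: $(echo "scale=2; 100*$l2_hits/($l2_hits+$l2_misses)" | bc)"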
zpool status pool: data state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM data ONLINE 0 0 0 da2 ONLINE 0 0 0 da4 ONLINE 0 0 0 cache da3 ONLINE 0 0 0 #sysctl kstat kstat.zfs.misc.arcstats.hits: 282927703 kstat.zfs.misc.arcstats.misses: 66220328 kstat.zfs.misc.arcstats.demand_data_hits: 164374119 kstat.zfs.misc.arcstats.demand_data_misses: 6615511 kstat.zfs.misc.arcstats.demand_metadata_hits: 88715021 kstat.zfs.misc.arcstats.demand_metadata_misses: 4464890 kstat.zfs.misc.arcstats.prefetch_data_hits: 28851210 kstat.zfs.misc.arcstats.prefetch_data_misses: 55109950 kstat.zfs.misc.arcstats.prefetch_metadata_hits: 987353 kstat.zfs.misc.arcstats.prefetch_metadata_misses: 29977 kstat.zfs.misc.arcstats.mru_hits: 44560461 kstat.zfs.misc.arcstats.mru_ghost_hits: 1493532 kstat.zfs.misc.arcstats.mfu_hits: 211027800 kstat.zfs.misc.arcstats.mfu_ghost_hits: 16337660 kstat.zfs.misc.arcstats.deleted: 49112923 kstat.zfs.misc.arcstats.recycle_miss: 9574100 kstat.zfs.misc.arcstats.mutex_miss: 252423 kstat.zfs.misc.arcstats.evict_skip: 2269320648 kstat.zfs.misc.arcstats.hash_elements: 644877 kstat.zfs.misc.arcstats.hash_elements_max: 678888 kstat.zfs.misc.arcstats.hash_collisions: 21697862 kstat.zfs.misc.arcstats.hash_chains: 182323 kstat.zfs.misc.arcstats.hash_chain_max: 9 kstat.zfs.misc.arcstats.p: 1251375616 kstat.zfs.misc.arcstats.c: 1252817408 kstat.zfs.misc.arcstats.c_min: 1252817408 kstat.zfs.misc.arcstats.c_max: 10022539264 kstat.zfs.misc.arcstats.size: 1237578176 kstat.zfs.misc.arcstats.hdr_size: 9610640 kstat.zfs.misc.arcstats.l2_hits: 12905801 kstat.zfs.misc.arcstats.l2_misses: 680 kstat.zfs.misc.arcstats.l2_feeds: 52666 kstat.zfs.misc.arcstats.l2_rw_clash: 680 kstat.zfs.misc.arcstats.l2_writes_sent: 41330 kstat.zfs.misc.arcstats.l2_writes_done: 41330 kstat.zfs.misc.arcstats.l2_writes_error: 0 kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 62 kstat.zfs.misc.arcstats.l2_evict_lock_retry: 53 kstat.zfs.misc.arcstats.l2_evict_reading: 5 kstat.zfs.misc.arcstats.l2_free_on_write: 30044 kstat.zfs.misc.arcstats.l2_abort_lowmem: 309837 kstat.zfs.misc.arcstats.l2_cksum_bad: 0 kstat.zfs.misc.arcstats.l2_io_error: 0 kstat.zfs.misc.arcstats.l2_size: 79319831552 kstat.zfs.misc.arcstats.l2_hdr_size: 134102528 kstat.zfs.misc.arcstats.memory_throttle_count: 112340 kstat.zfs.misc.vdev_cache_stats.delegations: 3822 kstat.zfs.misc.vdev_cache_stats.hits: 342974 kstat.zfs.misc.vdev_cache_stats.misses: 170601 WBR, Alexander Shevchenko From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 23:20:02 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C872E106566B for ; Thu, 1 Oct 2009 23:20:02 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-gx0-f214.google.com (mail-gx0-f214.google.com [209.85.217.214]) by mx1.freebsd.org (Postfix) with ESMTP id 836C58FC17 for ; Thu, 1 Oct 2009 23:20:02 +0000 (UTC) Received: by gxk6 with SMTP id 6so713185gxk.13 for ; Thu, 01 Oct 2009 16:20:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:received:in-reply-to :references:date:x-google-sender-auth:message-id:subject:from:to:cc :content-type; bh=MSOzVI2EPYgyYZB4LLvZdXMG2gE1E0jNZeERMyf60WE=; b=n9MyBFlCLH+DSLFkWF/ogCvsCfeHwLlo2YlrV0RzP0BEd995Wobjd/EUJbbsDkMv3f 2ZAb2wl9xebqNxr4AyxEYjQDeRDXXu1GLGIdzxmRT5EAKCNyDl7g4b/LMwHTlmIL4kGG QAKYiy6YgnfzCkm3E9Vkbx2cIpiTD5ebo+H3k= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; 
s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; b=wU5CIDJdd0donHqJQoT5RSLIsrRpWtOc4aIPEeyjxaMz6movTBQkdkMwRKar6KYFG7 u8nrsHVxU0Ikvqj3rnSjGRgYrNSTMySrK9upniEIumVEG/taBZKC5+AxaqFegfiUe8aZ FfwnuJMiCempinWg2+pfhuI+7wzJPHTQ5gfeg= MIME-Version: 1.0 Sender: artemb@gmail.com Received: by 10.90.245.3 with SMTP id s3mr1036917agh.43.1254439201553; Thu, 01 Oct 2009 16:20:01 -0700 (PDT) In-Reply-To: References: <8c9ae7950910011322j1a6b66fcp73615cc17ae20328@mail.gmail.com> Date: Thu, 1 Oct 2009 16:20:01 -0700 X-Google-Sender-Auth: 91188a5231aff42b Message-ID: From: Artem Belevich To: Alexander Shevchenko Content-Type: multipart/mixed; boundary=00163628407a0040900474e7e3d9 Cc: freebsd-fs@freebsd.org Subject: Re: ARC & L2ARC efficiency X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Oct 2009 23:20:03 -0000 --00163628407a0040900474e7e3d9 Content-Type: text/plain; charset=ISO-8859-1 Here's another script: http://www.solarisinternals.com/wiki/index.php/Arcstat Attached hacked FreeBSD version. --Artem On Thu, Oct 1, 2009 at 4:00 PM, Artem Belevich wrote: > There's a pretty useful script to present ARC stats (alas, L2ARC info > is not included) in a readable way: > http://cuddletech.com/arc_summary/ > > I've attaches somewhat hacked (and a bit outdated) version that runs on FreeBSD. > > --Artem > > > > On Thu, Oct 1, 2009 at 1:22 PM, Alexander Shevchenko wrote: >> Good time of day! >> >> How could i check the efficiency of ARC? --00163628407a0040900474e7e3d9 Content-Type: application/octet-stream; name="arcstat.pl" Content-Disposition: attachment; filename="arcstat.pl" Content-Transfer-Encoding: base64 X-Attachment-Id: f_g0a4la4q1 IyEvYmluL3BlcmwgLXcKIwojIFByaW50IG91dCBaRlMgQVJDIFN0YXRpc3RpY3MgZXhwb3J0ZWQg dmlhIGtzdGF0KDEpCiMgRm9yIGEgZGVmaW5pdGlvbiBvZiBmaWVsZHMsIG9yIHVzYWdlLCB1c2Ug YXJjdHN0YXQucGwgLXYKIwojIEF1dGhvcjogTmVlbGFrYW50aCBOYWRnaXIgaHR0cDovL2Jsb2dz LnN1bi5jb20vcmVhbG5lZWwKIyBDb21tZW50cy9RdWVzdGlvbnMvRmVlZGJhY2sgdG8gbmVlbF9z dW4uY29tIG9yIG5lZWxfZ251Lm9yZwojCiMgQ0RETCBIRUFERVIgU1RBUlQKIyAKIyBUaGUgY29u dGVudHMgb2YgdGhpcyBmaWxlIGFyZSBzdWJqZWN0IHRvIHRoZSB0ZXJtcyBvZiB0aGUKIyBDb21t b24gRGV2ZWxvcG1lbnQgYW5kIERpc3RyaWJ1dGlvbiBMaWNlbnNlLCBWZXJzaW9uIDEuMCBvbmx5 CiMgKHRoZSAiTGljZW5zZSIpLiAgWW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4g Y29tcGxpYW5jZQojIHdpdGggdGhlIExpY2Vuc2UuCiMgCiMgWW91IGNhbiBvYnRhaW4gYSBjb3B5 IG9mIHRoZSBsaWNlbnNlIGF0IHVzci9zcmMvT1BFTlNPTEFSSVMuTElDRU5TRQojIG9yIGh0dHA6 Ly93d3cub3BlbnNvbGFyaXMub3JnL29zL2xpY2Vuc2luZy4KIyBTZWUgdGhlIExpY2Vuc2UgZm9y IHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMKIyBhbmQgbGltaXRh dGlvbnMgdW5kZXIgdGhlIExpY2Vuc2UuCiMgCiMgV2hlbiBkaXN0cmlidXRpbmcgQ292ZXJlZCBD b2RlLCBpbmNsdWRlIHRoaXMgQ0RETCBIRUFERVIgaW4gZWFjaAojIGZpbGUgYW5kIGluY2x1ZGUg dGhlIExpY2Vuc2UgZmlsZSBhdCB1c3Ivc3JjL09QRU5TT0xBUklTLkxJQ0VOU0UuCiMgSWYgYXBw bGljYWJsZSwgYWRkIHRoZSBmb2xsb3dpbmcgYmVsb3cgdGhpcyBDRERMIEhFQURFUiwgd2l0aCB0 aGUKIyBmaWVsZHMgZW5jbG9zZWQgYnkgYnJhY2tldHMgIltdIiByZXBsYWNlZCB3aXRoIHlvdXIg b3duIGlkZW50aWZ5aW5nCiMgaW5mb3JtYXRpb246IFBvcnRpb25zIENvcHlyaWdodCBbeXl5eV0g W25hbWUgb2YgY29weXJpZ2h0IG93bmVyXQojIAojIENEREwgSEVBREVSIEVORAojCiMKIyBGaWVs ZHMgaGF2ZSBhIGZpeGVkIHdpZHRoLiBFdmVyeSBpbnRlcnZhbCwgd2UgZmlsbCB0aGUgInYiCiMg aGFzaCB3aXRoIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlICh2W2ZpZWxkXT12YWx1ZSkgdXNpbmcg 
Y2FsY3VsYXRlKCkuIAojIEBoZHIgaXMgdGhlIGFycmF5IG9mIGZpZWxkcyB0aGF0IG5lZWRzIHRv IGJlIHByaW50ZWQsIHNvIHdlCiMganVzdCBpdGVyYXRlIG92ZXIgdGhpcyBhcnJheSBhbmQgcHJp bnQgdGhlIHZhbHVlcyB1c2luZyBvdXIgcHJldHR5IHByaW50ZXIuCgp1c2Ugc3RyaWN0Owp1c2Ug UE9TSVggcXcoc3RyZnRpbWUpOwojdXNlIFN1bjo6U29sYXJpczo6S3N0YXQ7CnVzZSBHZXRvcHQ6 Okxvbmc7CnVzZSBJTzo6SGFuZGxlOwoKbXkgJWNvbHMgPSAoIyBIRFIgPT4gW1NpemUsIERlc2Ny aXB0aW9uXQoJIlRpbWUiCT0+WzgsICJUaW1lIl0sCgkiaGl0cyIJPT5bNCwgIkFyYyByZWFkcyBw ZXIgc2Vjb25kIl0sCgkibWlzcyIJPT5bNCwgIkFyYyBtaXNzZXMgcGVyIHNlY29uZCJdLAoJInJl YWQiCT0+WzQsICJUb3RhbCBBcmMgYWNjZXNzZXMgcGVyIHNlY29uZCJdLAoJIkhpdCUiCT0+WzQs ICJBcmMgSGl0IHBlcmNlbnRhZ2UiXSwKCSJtaXNzJSIJPT5bNSwgIkFyYyBtaXNzIHBlcmNlbnRh Z2UiXSwKCSJkaGl0Igk9Pls0LCAiRGVtYW5kIERhdGEgaGl0cyBwZXIgc2Vjb25kIl0sCgkiZG1p cyIJPT5bNCwgIkRlbWFuZCBEYXRhIG1pc3NlcyBwZXIgc2Vjb25kIl0sCgkiZGglIgk9PlszLCAi RGVtYW5kIERhdGEgaGl0IHBlcmNlbnRhZ2UiXSwKCSJkbSUiCT0+WzMsICJEZW1hbmQgRGF0YSBt aXNzIHBlcmNlbnRhZ2UiXSwKCSJwaGl0Igk9Pls0LCAiUHJlZmV0Y2ggaGl0cyBwZXIgc2Vjb25k Il0sCgkicG1pcyIJPT5bNCwgIlByZWZldGNoIG1pc3NlcyBwZXIgc2Vjb25kIl0sCgkicGglIgk9 PlszLCAiUHJlZmV0Y2ggaGl0cyBwZXJjZW50YWdlIl0sCgkicG0lIgk9PlszLCAiUHJlZmV0Y2gg bWlzcyBwZXJjZW50YWdlIl0sCgkibWhpdCIJPT5bNCwgIk1ldGFkYXRhIGhpdHMgcGVyIHNlY29u ZCJdLAoJIm1taXMiCT0+WzQsICJNZXRhZGF0YSBtaXNzZXMgcGVyIHNlY29uZCJdLAoJIm1yZWFk Igk9Pls1LCAiTWV0YWRhdGEgYWNjZXNzZXMgcGVyIHNlY29uZCJdLAoJIm1oJSIJPT5bMywgIk1l dGFkYXRhIGhpdCBwZXJjZW50YWdlIl0sCgkibW0lIgk9PlszLCAiTWV0YWRhdGEgbWlzcyBwZXJj ZW50YWdlIl0sCgkiYXJjc3oiCT0+WzUsICJBcmMgU2l6ZSJdLAoJImMiIAk9Pls0LCAiQXJjIFRh cmdldCBTaXplIl0sCgkibWZ1IiAJPT5bNCwgIk1GVSBMaXN0IGhpdHMgcGVyIHNlY29uZCJdLAoJ Im1ydSIgCT0+WzQsICJNUlUgTGlzdCBoaXRzIHBlciBzZWNvbmQiXSwKCSJtZnVnIiAJPT5bNCwg Ik1GVSBHaG9zdCBMaXN0IGhpdHMgcGVyIHNlY29uZCJdLAoJIm1ydWciIAk9Pls0LCAiTVJVIEdo b3N0IExpc3QgaGl0cyBwZXIgc2Vjb25kIl0sCgkiZXNraXAiCT0+WzUsICJldmljdF9za2lwIHBl ciBzZWNvbmQiXSwKCSJtdHhtaXMiPT5bNiwgIm11dGV4X21pc3MgcGVyIHNlY29uZCJdLAoJInJt aXMiCT0+WzQsICJyZWN5Y2xlX21pc3MgcGVyIHNlY29uZCJdLAoJImRyZWFkIgk9Pls1LCAiRGVt YW5kIGRhdGEgYWNjZXNzZXMgcGVyIHNlY29uZCJdLAoJInByZWFkIgk9Pls1LCAiUHJlZmV0Y2gg YWNjZXNzZXMgcGVyIHNlY29uZCJdLAopOwpteSAldj0oKTsKbXkgQGhkciA9IHF3KFRpbWUgcmVh ZCBtaXNzIG1pc3MlIGRtaXMgZG0lIHBtaXMgcG0lIG1taXMgbW0lIGFyY3N6IGMpOwpteSBAeGhk ciA9IHF3KFRpbWUgbWZ1IG1ydSBtZnVnIG1ydWcgZXNraXAgbXR4bWlzIHJtaXMgZHJlYWQgcHJl YWQgcmVhZCk7Cm15ICRpbnQgPSAxOwkJIyBQcmludCBzdGF0cyBldmVyeSAxIHNlY29uZCBieSBk ZWZhdWx0Cm15ICRjb3VudCA9IDA7CQkjIFByaW50IHN0YXRzIGZvcmV2ZXIKbXkgJGhkcl9pbnRy ID0gMjA7CSMgUHJpbnQgaGVhZGVyIGV2ZXJ5IDIwIGxpbmVzIG9mIG91dHB1dApteSAkb3BmaWxl ID0gIiI7Cm15ICRzZXAgPSAiICAiOwkJIyBEZWZhdWx0IHNlcGVyYXRvciBpcyAyIHNwYWNlcwpt eSAkdmVyc2lvbiA9ICIwLjEiOwpteSAkY21kID0gIlVzYWdlOiBhcmNzdGF0LnBsIFstaHZ4XSBb LWYgZmllbGRzXSBbLW8gZmlsZV0gW2ludGVydmFsIFtjb3VudF1dXG4iOwpteSAlY3VyOwpteSAl ZDsKbXkgJG91dDsKbXkgJGtzdGF0OyAjID0gU3VuOjpTb2xhcmlzOjpLc3RhdC0+bmV3KCk7ClNU RE9VVC0+YXV0b2ZsdXNoOwoKc3ViIGtzdGF0X3VwZGF0ZSB7CglteSBAayA9IGBzeXNjdGwgJ2tz dGF0Lnpmcy5taXNjLmFyY3N0YXRzJ2A7CgoJdW5kZWYgJGtzdGF0OwoKCWZvcmVhY2ggbXkgJGsg KEBrKSB7CgkgIGNob21wICRrOwoJICBteSAoJG5hbWUsJHZhbHVlKSA9IHNwbGl0IC86LywgJGs7 CgkgIG15IEB6ID0gc3BsaXQgL1wuLywgJG5hbWU7CgkgIG15ICRuID0gcG9wIEB6OwoJICAke2tz dGF0fS0+e3pmc30tPnswfS0+e2FyY3N0YXRzfS0+eyRufSA9ICR2YWx1ZTsKCX0KfQoKc3ViIGRl dGFpbGVkX3VzYWdlIHsKCXByaW50IFNUREVSUiAiQXJjc3RhdCB2ZXJzaW9uICR2ZXJzaW9uXG4k Y21kIjsKCXByaW50IFNUREVSUiAiRmllbGQgZGVmaW5pdGlvbnMgYXJlIGFzIGZvbGxvd3NcbiI7 Cglmb3JlYWNoIG15ICRoZHIgKGtleXMgJWNvbHMpIHsKCQlwcmludCBTVERFUlIgc3ByaW50Zigi JTZzIDogJXNcbiIsICRoZHIsICRjb2xzeyRoZHJ9WzFdKTsKCX0KCXByaW50IFNUREVSUiAiXG5O 
b3RlOiBLPTEwXjMgTT0xMF42IEc9MTBeOSBhbmQgc28gb25cbiI7CglleGl0KDEpOwoKfQoKc3Vi IHVzYWdlIHsKCXByaW50IFNUREVSUiAiQXJjc3RhdCB2ZXJzaW9uICR2ZXJzaW9uXG4kY21kIjsK CXByaW50IFNUREVSUiAiXHQgLXggOiBQcmludCBleHRlbmRlZCBzdGF0c1xuIjsKCXByaW50IFNU REVSUiAiXHQgLWYgOiBTcGVjaWZ5IHNwZWNpZmljIGZpZWxkcyB0byBwcmludCAoc2VlIC12KVxu IjsKCXByaW50IFNUREVSUiAiXHQgLW8gOiBQcmludCBzdGF0cyB0byBmaWxlXG4iOwoJcHJpbnQg U1RERVJSICJcdCAtcyA6IFNwZWNpZnkgYSBzZXBlcmF0b3JcblxuRXhhbXBsZXM6XG4iOwoJcHJp bnQgU1RERVJSICJcdGFyY3N0YXQgLW8gL3RtcC9hLmxvZyAyIDEwXG4iOwoJcHJpbnQgU1RERVJS ICJcdGFyY3N0YXQgLXMgLCAtbyAvdG1wL2EubG9nIDIgMTBcbiI7CglwcmludCBTVERFUlIgIlx0 YXJjc3RhdCAtdlxuIjsKCXByaW50IFNUREVSUiAiXHRhcmNzdGF0IC1mIFRpbWUsSGl0JSxkaCUs cGglLG1oJVxuIjsKCWV4aXQoMSk7Cn0KCnN1YiBpbml0IHsKCW15ICRkZXNpcmVkX2NvbHM7Cglt eSAkeGZsYWcgPSAnJzsKCW15ICRoZmxhZyA9ICcnOwoJbXkgJHZmbGFnOwoJbXkgJHJlcyA9IEdl dE9wdGlvbnMoJ3gnID0+IFwkeGZsYWcsCgkJJ289cycgPT4gXCRvcGZpbGUsCgkJJ2hlbHB8aHw/ JyA9PiBcJGhmbGFnLAoJCSd2JyA9PiBcJHZmbGFnLAoJCSdzPXMnID0+IFwkc2VwLAoJCSdmPXMn ID0+IFwkZGVzaXJlZF9jb2xzKTsKCSRpbnQgPSAkQVJHVlswXSB8fCAkaW50OwoJJGNvdW50ID0g JEFSR1ZbMV0gfHwgJGNvdW50OwoJdXNhZ2UoKSBpZiAhJHJlcyBvciAkaGZsYWcgb3IgKCR4Zmxh ZyBhbmQgJGRlc2lyZWRfY29scyk7CglkZXRhaWxlZF91c2FnZSgpIGlmICR2ZmxhZzsKCUBoZHIg PSBAeGhkciBpZiAkeGZsYWc7CQkjcmVzZXQgaGVhZGVycyB0byB4aGRyCglpZiAoJGRlc2lyZWRf Y29scykgewoJCUBoZHIgPSBzcGxpdCgvWyAsXSsvLCAkZGVzaXJlZF9jb2xzKTsKCQkjIE5vdyBj aGVjayBpZiB0aGV5IGFyZSB2YWxpZCBmaWVsZHMKCQlteSBAaW52YWxpZCA9ICgpOwoJCWZvcmVh Y2ggbXkgJGVsZSAoQGhkcikgewoJCQlwdXNoKEBpbnZhbGlkLCAkZWxlKSBpZiBub3QgZXhpc3Rz KCRjb2xzeyRlbGV9KTsKCQl9CgkJaWYgKHNjYWxhciBAaW52YWxpZCA+IDApIHsKCQkJcHJpbnQg U1RERVJSICJJbnZhbGlkIGNvbHVtbiBkZWZpbml0aW9uISAtLSAiCgkJCQkuICJAaW52YWxpZFxu XG4iOwoJCQl1c2FnZSgpOwoJCX0KCX0KCWlmICgkb3BmaWxlKSB7CgkJb3Blbigkb3V0LCAiPiRv cGZpbGUiKSB8fGRpZSAiQ2Fubm90IG9wZW4gJG9wZmlsZSBmb3Igd3JpdGluZyI7CgkJJG91dC0+ YXV0b2ZsdXNoOwoJCXNlbGVjdCAkb3V0OwoJfQp9CgojIENhcHR1cmUga3N0YXQgc3RhdGlzdGlj cy4gV2UgbWFpbnRhaW4gMyBoYXNoZXMsIHByZXYsIGN1ciwgYW5kCiMgZCAoZGVsdGEpLiBBcyB0 aGVpciBuYW1lcyBpbXBseSB0aGV5IG1haW50YWluIHRoZSBwcmV2aW91cywgY3VycmVudCwKIyBh bmQgZGVsdGEgKGN1ciAtIHByZXYpIHN0YXRpc3RpY3MuCnN1YiBzbmFwX3N0YXRzIHsKCW15ICVw cmV2ID0gJWN1cjsKCWtzdGF0X3VwZGF0ZSgpOwoKCW15ICRoYXNocmVmX2N1ciA9ICRrc3RhdC0+ eyJ6ZnMifXswfXsiYXJjc3RhdHMifTsKCSVjdXIgPSAlJGhhc2hyZWZfY3VyOwoJZm9yZWFjaCBt eSAka2V5IChrZXlzICVjdXIpIHsKCQluZXh0IGlmICRrZXkgPX4gL2NsYXNzLzsKCQlpZiAoZGVm aW5lZCAkcHJldnska2V5fSkgewoJCQkkZHska2V5fSA9ICRjdXJ7JGtleX0gLSAkcHJldnska2V5 fTsKCQl9IGVsc2UgewoJCQkkZHska2V5fSA9ICRjdXJ7JGtleX07CgkJfQoJfQp9CgojIFByZXR0 eSBwcmludCBudW0uIEFyZ3VtZW50cyBhcmUgd2lkdGggYW5kIG51bQpzdWIgcHJldHR5bnVtIHsK CW15IEBzdWZmaXg9KCcgJywnSycsICdNJywgJ0cnLCAnVCcsICdQJywgJ0UnLCAnWicpOwoJbXkg JG51bSA9ICRfWzFdIHx8IDA7CglteSAkc3ogPSAkX1swXTsKCW15ICRpbmRleCA9IDA7CglyZXR1 cm4gc3ByaW50ZigiJXMiLCAkbnVtKSBpZiBub3QgJG51bSA9fiAvXlswLTlcLl0rJC87Cgl3aGls ZSAoJG51bSA+IDEwMDAgYW5kICRpbmRleCA8IDgpIHsKCQkkbnVtID0gJG51bS8xMDAwOwoJCSRp bmRleCsrOwoJfQoJcmV0dXJuIHNwcmludGYoIiUqZCIsICRzeiwgJG51bSkgaWYgKCRpbmRleCA9 PSAwKTsKCXJldHVybiBzcHJpbnRmKCIlKmQlcyIsICRzeiAtIDEsICRudW0sJHN1ZmZpeFskaW5k ZXhdKTsKfQoKc3ViIHByaW50X3ZhbHVlcyB7Cglmb3JlYWNoIG15ICRjb2wgKEBoZHIpIHsKCQlw cmludGYoIiVzJXMiLCBwcmV0dHludW0oJGNvbHN7JGNvbH1bMF0sICR2eyRjb2x9KSwgJHNlcCk7 Cgl9CglwcmludGYoIlxuIik7Cn0KCnN1YiBwcmludF9oZWFkZXIgewoJZm9yZWFjaCBteSAkY29s IChAaGRyKSB7CgkJcHJpbnRmKCIlKnMlcyIsICRjb2xzeyRjb2x9WzBdLCAkY29sLCAkc2VwKTsK CX0KCXByaW50ZigiXG4iKTsKfQoKc3ViIGNhbGN1bGF0ZSB7Cgkldj0oKTsKCSR2eyJUaW1lIn0g PSBzdHJmdGltZSgiJUg6JU06JVMiLCBsb2NhbHRpbWUpOwoJJHZ7ImhpdHMifSA9ICRkeyJoaXRz 
In0vJGludDsKCSR2eyJtaXNzIn0gPSAkZHsibWlzc2VzIn0vJGludDsKCSR2eyJyZWFkIn0gPSAk dnsiaGl0cyJ9ICsgJHZ7Im1pc3MifTsKCSR2eyJIaXQlIn0gPSAxMDAqJHZ7ImhpdHMifS8kdnsi cmVhZCJ9IGlmICR2eyJyZWFkIn0gPiAwOwoJJHZ7Im1pc3MlIn0gPSAxMDAgLSAkdnsiSGl0JSJ9 IGlmICR2eyJyZWFkIn0gPiAwOwoKCSR2eyJkaGl0In0gPSAoJGR7ImRlbWFuZF9kYXRhX2hpdHMi fSArICRkeyJkZW1hbmRfbWV0YWRhdGFfaGl0cyJ9KS8kaW50OwoJJHZ7ImRtaXMifSA9ICgkZHsi ZGVtYW5kX2RhdGFfbWlzc2VzIn0rJGR7ImRlbWFuZF9tZXRhZGF0YV9taXNzZXMifSkvJGludDsK CSR2eyJkcmVhZCJ9ID0gJHZ7ImRoaXQifSArICR2eyJkbWlzIn07CgkkdnsiZGglIn0gPSAxMDAq JHZ7ImRoaXQifS8kdnsiZHJlYWQifSBpZiAkdnsiZHJlYWQifSA+IDA7CgkkdnsiZG0lIn0gPSAx MDAgLSAkdnsiZGglIn0gaWYgJHZ7ImRyZWFkIn0gPiAwOwoKCSR2eyJwaGl0In09KCRkeyJwcmVm ZXRjaF9kYXRhX2hpdHMifSArICRkeyJwcmVmZXRjaF9tZXRhZGF0YV9oaXRzIn0pLyRpbnQ7Cgkk dnsicG1pcyJ9PSgkZHsicHJlZmV0Y2hfZGF0YV9taXNzZXMifQoJCSskZHsicHJlZmV0Y2hfbWV0 YWRhdGFfbWlzc2VzIn0pLyRpbnQ7CgkkdnsicHJlYWQifSA9ICR2eyJwaGl0In0gKyAkdnsicG1p cyJ9OwoJJHZ7InBoJSJ9ID0gMTAwKiR2eyJwaGl0In0vJHZ7InByZWFkIn0gaWYgJHZ7InByZWFk In0gPiAwOwoJJHZ7InBtJSJ9ID0gMTAwIC0gJHZ7InBoJSJ9IGlmICR2eyJwcmVhZCJ9ID4gMDsK CgkkdnsibWhpdCJ9PSgkZHsicHJlZmV0Y2hfbWV0YWRhdGFfaGl0cyJ9KyRkeyJkZW1hbmRfbWV0 YWRhdGFfaGl0cyJ9KS8kaW50OwoJJHZ7Im1taXMifT0oJGR7InByZWZldGNoX21ldGFkYXRhX21p c3NlcyJ9CgkJKyRkeyJkZW1hbmRfbWV0YWRhdGFfbWlzc2VzIn0pLyRpbnQ7CgkkdnsibXJlYWQi fSA9ICR2eyJtaGl0In0gKyAkdnsibW1pcyJ9OwoJJHZ7Im1oJSJ9ID0gMTAwKiR2eyJtaGl0In0v JHZ7Im1yZWFkIn0gaWYgJHZ7Im1yZWFkIn0gPiAwOwoJJHZ7Im1tJSJ9ID0gMTAwIC0gJHZ7Im1o JSJ9IGlmICR2eyJtcmVhZCJ9ID4gMDsKCgkkdnsiYXJjc3oifSA9ICRjdXJ7InNpemUifTsKCSR2 eyJjIn0gPSAkY3VyeyJjIn07CgkkdnsibWZ1In0gPSAkZHsiaGl0cyJ9LyRpbnQ7CgkkdnsibXJ1 In0gPSAkZHsibXJ1X2hpdHMifS8kaW50OwoJJHZ7Im1ydWcifSA9ICRkeyJtcnVfZ2hvc3RfaGl0 cyJ9LyRpbnQ7CgkkdnsibWZ1ZyJ9ID0gJGR7Im1ydV9naG9zdF9oaXRzIn0vJGludDsKCSR2eyJl c2tpcCJ9ID0gJGR7ImV2aWN0X3NraXAifS8kaW50OwoJJHZ7InJtaXNzIn0gPSAkZHsicmVjeWNs ZV9taXNzIn0vJGludDsKCSR2eyJtdHhtaXMifSA9ICRkeyJtdXRleF9taXNzIn0vJGludDsKfQoK c3ViIG1haW4gewoJbXkgJGkgPSAwOwoJbXkgJGNvdW50X2ZsYWcgPSAwOwoKCWluaXQoKTsKCWlm ICgkY291bnQgPiAwKSB7ICRjb3VudF9mbGFnID0gMTsgfQoJd2hpbGUgKDEpIHsKCQlwcmludF9o ZWFkZXIoKSBpZiAoJGkgPT0gMCk7CgkJc25hcF9zdGF0cygpOwoJCWNhbGN1bGF0ZSgpOwoJCXBy aW50X3ZhbHVlcygpOwoJCWxhc3QgaWYgKCRjb3VudF9mbGFnID09IDEgJiYgJGNvdW50LS0gPD0g MSk7CgkJJGkgPSAoJGkgPT0gJGhkcl9pbnRyKSA/IDAgOiAkaSsxOwoJCXNsZWVwKCRpbnQpOwoJ fQoJY2xvc2UoJG91dCkgaWYgZGVmaW5lZCAkb3V0Owp9CgombWFpbjsK --00163628407a0040900474e7e3d9-- From owner-freebsd-fs@FreeBSD.ORG Thu Oct 1 23:24:12 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4E10F106566B for ; Thu, 1 Oct 2009 23:24:12 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-yx0-f171.google.com (mail-yx0-f171.google.com [209.85.210.171]) by mx1.freebsd.org (Postfix) with ESMTP id 0379D8FC15 for ; Thu, 1 Oct 2009 23:24:11 +0000 (UTC) Received: by yxe1 with SMTP id 1so617258yxe.3 for ; Thu, 01 Oct 2009 16:24:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:received:in-reply-to :references:date:x-google-sender-auth:message-id:subject:from:to:cc :content-type; bh=4x8fAO69cZMYH4NxTOlRF2PcFHmL2mpKLpc5x9xBxBA=; b=HJgjY8GgyAcmrODGS45c8bUkVkFrl96XfzoVeVQJrLQlBKJYfP8EvLLhbGyyeXZV/A R5b0+bnav0pA9iJ7Tc17S2jLbKIdq/cKCx21t0N+yBj8lms68cr19fyxnaOXMNYIY7yr legX26aEoUDqlPW8liEq2x7lFnTYkHa2EAud4= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date 
:x-google-sender-auth:message-id:subject:from:to:cc:content-type; b=m1AvtqZvC7NIbmN8jFx6o8GEvE+qTEXl4mhKy0Cc2X2C+ca+6Ar7Oy0g9gCcODYtBE EOIPkDlFxIL/PsYCIZfXPEPPWp7eXkPrkz9cGnold+BOc2tmB38nXBrTfZxR5OAxnbPp sYElxiQSO/Ro/VPqYuKsOLcWpjHCTtVpMHoeU= MIME-Version: 1.0 Sender: artemb@gmail.com Received: by 10.90.10.9 with SMTP id 9mr1011512agj.69.1254438003137; Thu, 01 Oct 2009 16:00:03 -0700 (PDT) In-Reply-To: <8c9ae7950910011322j1a6b66fcp73615cc17ae20328@mail.gmail.com> References: <8c9ae7950910011322j1a6b66fcp73615cc17ae20328@mail.gmail.com> Date: Thu, 1 Oct 2009 16:00:03 -0700 X-Google-Sender-Auth: 1aa82c7358e2208f Message-ID: From: Artem Belevich To: Alexander Shevchenko Content-Type: multipart/mixed; boundary=00163616438f91deb30474e79b4f Cc: freebsd-fs@freebsd.org Subject: Re: ARC & L2ARC efficiency X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 01 Oct 2009 23:24:12 -0000 --00163616438f91deb30474e79b4f Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable There's a pretty useful script to present ARC stats (alas, L2ARC info is not included) in a readable way: http://cuddletech.com/arc_summary/ I've attached a somewhat hacked (and a bit outdated) version that runs on FreeBSD. --Artem On Thu, Oct 1, 2009 at 1:22 PM, Alexander Shevchenko wrote: > Good time of day! > > How could I check the efficiency of the ARC? > Are total reads from the pool equal to kstat.zfs.misc.arcstats.hits + > kstat.zfs.misc.arcstats.misses, or are these values just reads from the cache? > By efficiency I mean reads_from_cache/(reads_from_cache+reads_from_drives). > Is there any document where the kstat values are described?
> > zpool status > pool: data > state: ONLINE > scrub: none requested > config: > > NAME STATE READ WRITE CKSUM > data ONLINE 0 0 0 > da2 ONLINE 0 0 0 > da4 ONLINE 0 0 0 > cache > da3 ONLINE 0 0 0 > > > #sysctl kstat > kstat.zfs.misc.arcstats.hits: 282927703 > kstat.zfs.misc.arcstats.misses: 66220328 > kstat.zfs.misc.arcstats.demand_data_hits: 164374119 > kstat.zfs.misc.arcstats.demand_data_misses: 6615511 > kstat.zfs.misc.arcstats.demand_metadata_hits: 88715021 > kstat.zfs.misc.arcstats.demand_metadata_misses: 4464890 > kstat.zfs.misc.arcstats.prefetch_data_hits: 28851210 > kstat.zfs.misc.arcstats.prefetch_data_misses: 55109950 > kstat.zfs.misc.arcstats.prefetch_metadata_hits: 987353 > kstat.zfs.misc.arcstats.prefetch_metadata_misses: 29977 > kstat.zfs.misc.arcstats.mru_hits: 44560461 > kstat.zfs.misc.arcstats.mru_ghost_hits: 1493532 > kstat.zfs.misc.arcstats.mfu_hits: 211027800 > kstat.zfs.misc.arcstats.mfu_ghost_hits: 16337660 > kstat.zfs.misc.arcstats.deleted: 49112923 > kstat.zfs.misc.arcstats.recycle_miss: 9574100 > kstat.zfs.misc.arcstats.mutex_miss: 252423 > kstat.zfs.misc.arcstats.evict_skip: 2269320648 > kstat.zfs.misc.arcstats.hash_elements: 644877 > kstat.zfs.misc.arcstats.hash_elements_max: 678888 > kstat.zfs.misc.arcstats.hash_collisions: 21697862 > kstat.zfs.misc.arcstats.hash_chains: 182323 > kstat.zfs.misc.arcstats.hash_chain_max: 9 > kstat.zfs.misc.arcstats.p: 1251375616 > kstat.zfs.misc.arcstats.c: 1252817408 > kstat.zfs.misc.arcstats.c_min: 1252817408 > kstat.zfs.misc.arcstats.c_max: 10022539264 > kstat.zfs.misc.arcstats.size: 1237578176 > kstat.zfs.misc.arcstats.hdr_size: 9610640 > kstat.zfs.misc.arcstats.l2_hits: 12905801 > kstat.zfs.misc.arcstats.l2_misses: 680 > kstat.zfs.misc.arcstats.l2_feeds: 52666 > kstat.zfs.misc.arcstats.l2_rw_clash: 680 > kstat.zfs.misc.arcstats.l2_writes_sent: 41330 > kstat.zfs.misc.arcstats.l2_writes_done: 41330 > kstat.zfs.misc.arcstats.l2_writes_error: 0 > kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 62 > kstat.zfs.misc.arcstats.l2_evict_lock_retry: 53 > kstat.zfs.misc.arcstats.l2_evict_reading: 5 > kstat.zfs.misc.arcstats.l2_free_on_write: 30044 > kstat.zfs.misc.arcstats.l2_abort_lowmem: 309837 > kstat.zfs.misc.arcstats.l2_cksum_bad: 0 > kstat.zfs.misc.arcstats.l2_io_error: 0 > kstat.zfs.misc.arcstats.l2_size: 79319831552 > kstat.zfs.misc.arcstats.l2_hdr_size: 134102528 > kstat.zfs.misc.arcstats.memory_throttle_count: 112340 > kstat.zfs.misc.vdev_cache_stats.delegations: 3822 > kstat.zfs.misc.vdev_cache_stats.hits: 342974 > kstat.zfs.misc.vdev_cache_stats.misses: 170601 > > > WBR, > Alexander Shevchenko > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > --00163616438f91deb30474e79b4f Content-Type: application/octet-stream; name="arc_summary.pl" Content-Disposition: attachment; filename="arc_summary.pl" Content-Transfer-Encoding: base64 X-Attachment-Id: f_g0a3wh810 IyEvdXNyL2Jpbi9wZXJsIC13CiMKIyMgYmVuckBjdWRkbGV0ZWNoLmNvbQojIyBhcmNfc3VtbWFy eS5wbCB2MC4yCgp1c2Ugc3RyaWN0OwoKbXkgJEtzdGF0OyAjID0gU3VuOjpTb2xhcmlzOjpLc3Rh
dC0+bmV3KCk7CgojIyMgU3lzdGVtIE1lbW9yeSAjIyMKbXkgJHBoeXNfcGFnZXMgPSAwOwpteSAk ZnJlZV9wYWdlcyA9IDA7Cm15ICRsb3RzZnJlZV9wYWdlcyA9IDA7Cm15ICRwYWdlc2l6ZSA9IGBz eXNjdGwgLW4gJ2h3LnBhZ2VzaXplJ2A7CgpteSAkcGh5c19tZW1vcnkgPSBgc3lzY3RsIC1uICdo dy5waHlzbWVtJ2A7CiRwaHlzX3BhZ2VzID0gICRwaHlzX21lbW9yeSAvICRwYWdlc2l6ZTsKbXkg JGZyZWVfbWVtb3J5ID0gMDsKbXkgJGxvdHNmcmVlX21lbW9yeSA9IDA7CgpwcmludCAiU3lzdGVt IE1lbW9yeTpcbiI7CnByaW50ZigiXHQgUGh5c2ljYWwgUkFNOiBcdCVkIE1CXG4iLCAkcGh5c19t ZW1vcnkgLyAxMDI0IC8gMTAyNCk7CnByaW50ZigiXHQgRnJlZSBNZW1vcnkgOiBcdCVkIE1CXG4i LCAkZnJlZV9tZW1vcnkgLyAxMDI0IC8gMTAyNCk7CnByaW50ICJcbiI7CiMjIyMjIyMjIyMjIyMj IyMjIyMjIyMjIyMjCgoKCm15IEBrID0gYHN5c2N0bCAna3N0YXQuemZzLm1pc2MuYXJjc3RhdHMn YDsKCmZvcmVhY2ggbXkgJGsgKEBrKSB7CiAgY2hvbXAgJGs7CiAgbXkgKCRuYW1lLCR2YWx1ZSkg PSBzcGxpdCAvOi8sICRrOwogIG15IEB6ID0gc3BsaXQgL1wuLywgJG5hbWU7CiAgbXkgJG4gPSBw b3AgQHo7CiAgJHtLc3RhdH0tPnt6ZnN9LT57MH0tPnthcmNzdGF0c30tPnskbn0gPSAkdmFsdWU7 Cn0KCgojIyMjIEFSQyBTaXppbmcgIyMjIyMjIyMjIyMjIyMjCm15ICRtcnVfc2l6ZSA9ICR7S3N0 YXR9LT57emZzfS0+ezB9LT57YXJjc3RhdHN9LT57cH07Cm15ICR0YXJnZXRfc2l6ZSA9ICR7S3N0 YXR9LT57emZzfS0+ezB9LT57YXJjc3RhdHN9LT57Y307Cm15ICRhcmNfbWluX3NpemUgPSAke0tz dGF0fS0+e3pmc30tPnswfS0+e2FyY3N0YXRzfS0+e2NfbWlufTsKbXkgJGFyY19tYXhfc2l6ZSA9 ICR7S3N0YXR9LT57emZzfS0+ezB9LT57YXJjc3RhdHN9LT57Y19tYXh9OwoKbXkgJGFyY19zaXpl ID0gJHtLc3RhdH0tPnt6ZnN9LT57MH0tPnthcmNzdGF0c30tPntzaXplfTsKbXkgJG1mdV9zaXpl ID0gJHt0YXJnZXRfc2l6ZX0gLSAkbXJ1X3NpemU7Cm15ICRtcnVfcGVyYyA9IDEwMCooJG1ydV9z aXplIC8gJHRhcmdldF9zaXplKTsKbXkgJG1mdV9wZXJjID0gMTAwKigkbWZ1X3NpemUgLyAkdGFy Z2V0X3NpemUpOwoKCnByaW50ICJBUkMgU2l6ZTpcbiI7CnByaW50ZigiXHQgQ3VycmVudCBTaXpl OiAgICAgICAgICAgICAlZCBNQiAoYXJjc2l6ZSlcbiIsICRhcmNfc2l6ZSAvIDEwMjQgLyAxMDI0 KTsKcHJpbnRmKCJcdCBUYXJnZXQgU2l6ZSAoQWRhcHRpdmUpOiAgICVkIE1CIChjKVxuIiwgJHRh cmdldF9zaXplIC8gMTAyNCAvIDEwMjQpOwpwcmludGYoIlx0IE1pbiBTaXplIChIYXJkIExpbWl0 KTogICAgJWQgTUIgKHpmc19hcmNfbWluKVxuIiwgJGFyY19taW5fc2l6ZSAvIDEwMjQgLyAxMDI0 KTsKcHJpbnRmKCJcdCBNYXggU2l6ZSAoSGFyZCBMaW1pdCk6ICAgICVkIE1CICh6ZnNfYXJjX21h eClcbiIsICRhcmNfbWF4X3NpemUgLyAxMDI0IC8gMTAyNCk7CgpwcmludCAiXG5BUkMgU2l6ZSBC cmVha2Rvd246XG4iOwoKcHJpbnRmKCJcdCBNb3N0IFJlY2VudGx5IFVzZWQgQ2FjaGUgU2l6ZTog XHQgJTJkJSUgXHQlZCBNQiAocClcbiIsICRtcnVfcGVyYywgJG1ydV9zaXplIC8gMTAyNCAvIDEw MjQpOwpwcmludGYoIlx0IE1vc3QgRnJlcXVlbnRseSBVc2VkIENhY2hlIFNpemU6IFx0ICUyZCUl IFx0JWQgTUIgKGMtcClcbiIsICRtZnVfcGVyYywgJG1mdV9zaXplIC8gMTAyNCAvIDEwMjQpOwpw cmludCAiXG4iOwojIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjCgojbXkgJGFyY19z aXplID0gJHtLc3RhdH0tPnt6ZnN9LT57MH0tPnthcmNzdGF0c30tPntzaXplfTsKCiAgICAgICAg CgojIyMjIyMjIEFSQyBFZmZpY2VuY3kgIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwpteSAkYXJj X2hpdHMgPSAke0tzdGF0fS0+e3pmc30tPnswfS0+e2FyY3N0YXRzfS0+e2hpdHN9OwpteSAkYXJj X21pc3NlcyA9ICR7S3N0YXR9LT57emZzfS0+ezB9LT57YXJjc3RhdHN9LT57bWlzc2VzfTsKbXkg JGFyY19hY2Nlc3Nlc190b3RhbCA9ICgkYXJjX2hpdHMgKyAkYXJjX21pc3Nlcyk7CgpteSAkYXJj X2hpdF9wZXJjID0gMTAwKigkYXJjX2hpdHMgLyAkYXJjX2FjY2Vzc2VzX3RvdGFsKTsKbXkgJGFy Y19taXNzX3BlcmMgPSAxMDAqKCRhcmNfbWlzc2VzIC8gJGFyY19hY2Nlc3Nlc190b3RhbCk7CgoK bXkgJG1mdV9oaXRzID0gJHtLc3RhdH0tPnt6ZnN9LT57MH0tPnthcmNzdGF0c30tPnttZnVfaGl0 c307Cm15ICRtcnVfaGl0cyA9ICR7S3N0YXR9LT57emZzfS0+ezB9LT57YXJjc3RhdHN9LT57bXJ1 X2hpdHN9OwpteSAkbWZ1X2dob3N0X2hpdHMgPSAke0tzdGF0fS0+e3pmc30tPnswfS0+e2FyY3N0 YXRzfS0+e21mdV9naG9zdF9oaXRzfTsKbXkgJG1ydV9naG9zdF9oaXRzID0gJHtLc3RhdH0tPnt6 ZnN9LT57MH0tPnthcmNzdGF0c30tPnttcnVfZ2hvc3RfaGl0c307Cm15ICRhbm9uX2hpdHMgPSAk YXJjX2hpdHMgLSAoJG1mdV9oaXRzICsgJG1ydV9oaXRzICsgJG1mdV9naG9zdF9oaXRzICsgJG1y dV9naG9zdF9oaXRzKTsKCm15ICRyZWFsX2hpdHMgPSAoJG1mdV9oaXRzICsgJG1ydV9oaXRzKTsK 
bXkgJHJlYWxfaGl0c19wZXJjID0gMTAwKigkcmVhbF9oaXRzIC8gJGFyY19hY2Nlc3Nlc190b3Rh bCk7CgojIyMgVGhlc2Ugc2hvdWxkIGJlIGJhc2VkIG9uIFRPVEFMIEhJVFMgKCRhcmNfaGl0cykK bXkgJGFub25faGl0c19wZXJjID0gMTAwKigkYW5vbl9oaXRzIC8gJGFyY19oaXRzKTsKbXkgJG1m dV9oaXRzX3BlcmMgPSAxMDAqKCRtZnVfaGl0cyAvICRhcmNfaGl0cyk7Cm15ICRtcnVfaGl0c19w ZXJjID0gMTAwKigkbXJ1X2hpdHMgLyAkYXJjX2hpdHMpOwpteSAkbWZ1X2dob3N0X2hpdHNfcGVy YyA9IDEwMCooJG1mdV9naG9zdF9oaXRzIC8gJGFyY19oaXRzKTsKbXkgJG1ydV9naG9zdF9oaXRz X3BlcmMgPSAxMDAqKCRtcnVfZ2hvc3RfaGl0cyAvICRhcmNfaGl0cyk7CgoKbXkgJGRlbWFuZF9k YXRhX2hpdHMgPSAke0tzdGF0fS0+e3pmc30tPnswfS0+e2FyY3N0YXRzfS0+e2RlbWFuZF9kYXRh X2hpdHN9OwpteSAkZGVtYW5kX21ldGFkYXRhX2hpdHMgPSAke0tzdGF0fS0+e3pmc30tPnswfS0+ e2FyY3N0YXRzfS0+e2RlbWFuZF9tZXRhZGF0YV9oaXRzfTsKbXkgJHByZWZldGNoX2RhdGFfaGl0 cyA9ICR7S3N0YXR9LT57emZzfS0+ezB9LT57YXJjc3RhdHN9LT57cHJlZmV0Y2hfZGF0YV9oaXRz fTsKbXkgJHByZWZldGNoX21ldGFkYXRhX2hpdHMgPSAke0tzdGF0fS0+e3pmc30tPnswfS0+e2Fy Y3N0YXRzfS0+e3ByZWZldGNoX21ldGFkYXRhX2hpdHN9OwoKbXkgJGRlbWFuZF9kYXRhX2hpdHNf cGVyYyA9IDEwMCooJGRlbWFuZF9kYXRhX2hpdHMgLyAkYXJjX2hpdHMpOwpteSAkZGVtYW5kX21l dGFkYXRhX2hpdHNfcGVyYyA9IDEwMCooJGRlbWFuZF9tZXRhZGF0YV9oaXRzIC8gJGFyY19oaXRz KTsKbXkgJHByZWZldGNoX2RhdGFfaGl0c19wZXJjID0gMTAwKigkcHJlZmV0Y2hfZGF0YV9oaXRz IC8gJGFyY19oaXRzKTsKbXkgJHByZWZldGNoX21ldGFkYXRhX2hpdHNfcGVyYyA9IDEwMCooJHBy ZWZldGNoX21ldGFkYXRhX2hpdHMgLyAkYXJjX2hpdHMpOwoKCm15ICRkZW1hbmRfZGF0YV9taXNz ZXMgPSAke0tzdGF0fS0+e3pmc30tPnswfS0+e2FyY3N0YXRzfS0+e2RlbWFuZF9kYXRhX21pc3Nl c307Cm15ICRkZW1hbmRfbWV0YWRhdGFfbWlzc2VzID0gJHtLc3RhdH0tPnt6ZnN9LT57MH0tPnth cmNzdGF0c30tPntkZW1hbmRfbWV0YWRhdGFfbWlzc2VzfTsKbXkgJHByZWZldGNoX2RhdGFfbWlz c2VzID0gJHtLc3RhdH0tPnt6ZnN9LT57MH0tPnthcmNzdGF0c30tPntwcmVmZXRjaF9kYXRhX21p c3Nlc307Cm15ICRwcmVmZXRjaF9tZXRhZGF0YV9taXNzZXMgPSAke0tzdGF0fS0+e3pmc30tPnsw fS0+e2FyY3N0YXRzfS0+e3ByZWZldGNoX21ldGFkYXRhX21pc3Nlc307CgpteSAkZGVtYW5kX2Rh dGFfbWlzc2VzX3BlcmMgPSAxMDAqKCRkZW1hbmRfZGF0YV9taXNzZXMgLyAkYXJjX21pc3Nlcyk7 Cm15ICRkZW1hbmRfbWV0YWRhdGFfbWlzc2VzX3BlcmMgPSAxMDAqKCRkZW1hbmRfbWV0YWRhdGFf bWlzc2VzIC8gJGFyY19taXNzZXMpOwpteSAkcHJlZmV0Y2hfZGF0YV9taXNzZXNfcGVyYyA9IDEw MCooJHByZWZldGNoX2RhdGFfbWlzc2VzIC8gJGFyY19taXNzZXMpOwpteSAkcHJlZmV0Y2hfbWV0 YWRhdGFfbWlzc2VzX3BlcmMgPSAxMDAqKCRwcmVmZXRjaF9tZXRhZGF0YV9taXNzZXMgLyAkYXJj X21pc3Nlcyk7CgpteSAkcHJlZmV0Y2hfZGF0YV90b3RhbCA9ICgkcHJlZmV0Y2hfZGF0YV9oaXRz ICsgJHByZWZldGNoX2RhdGFfbWlzc2VzKTsKbXkgJHByZWZldGNoX2RhdGFfcGVyYyA9ICIwMCI7 CmlmICgkcHJlZmV0Y2hfZGF0YV90b3RhbCA+IDAgKSB7CiAgICAgICAgJHByZWZldGNoX2RhdGFf cGVyYyA9IDEwMCooJHByZWZldGNoX2RhdGFfaGl0cyAvICRwcmVmZXRjaF9kYXRhX3RvdGFsKTsK fQoKbXkgJGRlbWFuZF9kYXRhX3RvdGFsID0gKCRkZW1hbmRfZGF0YV9oaXRzICsgJGRlbWFuZF9k YXRhX21pc3Nlcyk7Cm15ICRkZW1hbmRfZGF0YV9wZXJjID0gMTAwKigkZGVtYW5kX2RhdGFfaGl0 cyAvICRkZW1hbmRfZGF0YV90b3RhbCk7CgoKcHJpbnQgIkFSQyBFZmZpY2VuY3k6XG4iOwpwcmlu dGYoIlx0IENhY2hlIEFjY2VzcyBUb3RhbDogICAgICAgIFx0ICVkXG4iLCAkYXJjX2FjY2Vzc2Vz X3RvdGFsKTsKcHJpbnRmKCJcdCBDYWNoZSBIaXQgUmF0aW86ICAgICAgJTJkJSVcdCAlZCAgIFx0 W0RlZmluZWQgU3RhdGUgZm9yIGJ1ZmZlcl1cbiIsICRhcmNfaGl0X3BlcmMsICRhcmNfaGl0cyk7 CnByaW50ZigiXHQgQ2FjaGUgTWlzcyBSYXRpbzogICAgICUyZCUlXHQgJWQgICBcdFtVbmRlZmlu ZWQgU3RhdGUgZm9yIEJ1ZmZlcl1cbiIsICRhcmNfbWlzc19wZXJjLCAkYXJjX21pc3Nlcyk7CnBy aW50ZigiXHQgUkVBTCBIaXQgUmF0aW86ICAgICAgICUyZCUlXHQgJWQgICBcdFtNUlUvTUZVIEhp dHMgT25seV1cbiIsICRyZWFsX2hpdHNfcGVyYywgJHJlYWxfaGl0cyk7CnByaW50ICJcbiI7CnBy aW50ZigiXHQgRGF0YSBEZW1hbmQgICBFZmZpY2llbmN5OiAgICAlMmQlJVxuIiwgJGRlbWFuZF9k YXRhX3BlcmMpOwppZiAoJHByZWZldGNoX2RhdGFfdG90YWwgPT0gMCl7IAogICAgICAgIHByaW50 ZigiXHQgRGF0YSBQcmVmZXRjaCBFZmZpY2llbmN5OiAgICBESVNBQkxFRCAoemZzX3ByZWZldGNo 
X2Rpc2FibGUpXG4iKTsKfSBlbHNlIHsKICAgICAgICBwcmludGYoIlx0IERhdGEgUHJlZmV0Y2gg RWZmaWNpZW5jeTogICAgJTJkJSVcbiIsICRwcmVmZXRjaF9kYXRhX3BlcmMpOwp9CnByaW50ICJc biI7CgoKcHJpbnQgIlx0Q0FDSEUgSElUUyBCWSBDQUNIRSBMSVNUOlxuIjsKaWYgKCAkYW5vbl9o aXRzIDwgMSApewogICAgICAgIHByaW50ZigiXHQgIEFub246ICAgICAgICAgICAgICAgICAgICAg ICAtLSUlIFx0IENvdW50ZXIgUm9sbGVkLlxuIik7Cn0gZWxzZSB7CiAgICAgICAgcHJpbnRmKCJc dCAgQW5vbjogICAgICAgICAgICAgICAgICAgICAgICUyZCUlIFx0ICVkICAgICAgICAgICAgXHRb IE5ldyBDdXN0b21lciwgRmlyc3QgQ2FjaGUgSGl0IF1cbiIsICRhbm9uX2hpdHNfcGVyYywgJGFu b25faGl0cyk7Cn0KcHJpbnRmKCJcdCAgTW9zdCBSZWNlbnRseSBVc2VkOiAgICAgICAgICUyZCUl IFx0ICVkIChtcnUpICAgICAgXHRbIFJldHVybiBDdXN0b21lciBdXG4iLCAkbXJ1X2hpdHNfcGVy YywgJG1ydV9oaXRzKTsKcHJpbnRmKCJcdCAgTW9zdCBGcmVxdWVudGx5IFVzZWQ6ICAgICAgICUy ZCUlIFx0ICVkIChtZnUpICAgICAgXHRbIEZyZXF1ZW50IEN1c3RvbWVyIF1cbiIsICRtZnVfaGl0 c19wZXJjLCAkbWZ1X2hpdHMpOwpwcmludGYoIlx0ICBNb3N0IFJlY2VudGx5IFVzZWQgR2hvc3Q6 ICAgJTJkJSUgXHQgJWQgKG1ydV9naG9zdClcdFsgUmV0dXJuIEN1c3RvbWVyIEV2aWN0ZWQsIE5v dyBCYWNrIF1cbiIsICRtcnVfZ2hvc3RfaGl0c19wZXJjLCAkbXJ1X2dob3N0X2hpdHMpOwpwcmlu dGYoIlx0ICBNb3N0IEZyZXF1ZW50bHkgVXNlZCBHaG9zdDogJTJkJSUgXHQgJWQgKG1mdV9naG9z dClcdFsgRnJlcXVlbnQgQ3VzdG9tZXIgRXZpY3RlZCwgTm93IEJhY2sgXVxuIiwgJG1mdV9naG9z dF9oaXRzX3BlcmMsICRtZnVfZ2hvc3RfaGl0cyk7CgpwcmludCAiXHRDQUNIRSBISVRTIEJZIERB VEEgVFlQRTpcbiI7CnByaW50ZigiXHQgIERlbWFuZCBEYXRhOiAgICAgICAgICAgICAgICAlMmQl JSBcdCAlZCBcbiIsICRkZW1hbmRfZGF0YV9oaXRzX3BlcmMsICRkZW1hbmRfZGF0YV9oaXRzKTsK cHJpbnRmKCJcdCAgUHJlZmV0Y2ggRGF0YTogICAgICAgICAgICAgICUyZCUlIFx0ICVkIFxuIiwg JHByZWZldGNoX2RhdGFfaGl0c19wZXJjLCAkcHJlZmV0Y2hfZGF0YV9oaXRzKTsKcHJpbnRmKCJc dCAgRGVtYW5kIE1ldGFkYXRhOiAgICAgICAgICAgICUyZCUlIFx0ICVkIFxuIiwgJGRlbWFuZF9t ZXRhZGF0YV9oaXRzX3BlcmMsICRkZW1hbmRfbWV0YWRhdGFfaGl0cyk7CnByaW50ZigiXHQgIFBy ZWZldGNoIE1ldGFkYXRhOiAgICAgICAgICAlMmQlJSBcdCAlZCBcbiIsICRwcmVmZXRjaF9tZXRh ZGF0YV9oaXRzX3BlcmMsICRwcmVmZXRjaF9tZXRhZGF0YV9oaXRzKTsKCnByaW50ICJcdENBQ0hF IE1JU1NFUyBCWSBEQVRBIFRZUEU6XG4iOwpwcmludGYoIlx0ICBEZW1hbmQgRGF0YTogICAgICAg ICAgICAgICAgJTJkJSUgXHQgJWQgXG4iLCAkZGVtYW5kX2RhdGFfbWlzc2VzX3BlcmMsICRkZW1h bmRfZGF0YV9taXNzZXMpOwpwcmludGYoIlx0ICBQcmVmZXRjaCBEYXRhOiAgICAgICAgICAgICAg JTJkJSUgXHQgJWQgXG4iLCAkcHJlZmV0Y2hfZGF0YV9taXNzZXNfcGVyYywgJHByZWZldGNoX2Rh dGFfbWlzc2VzKTsKcHJpbnRmKCJcdCAgRGVtYW5kIE1ldGFkYXRhOiAgICAgICAgICAgICUyZCUl IFx0ICVkIFxuIiwgJGRlbWFuZF9tZXRhZGF0YV9taXNzZXNfcGVyYywgJGRlbWFuZF9tZXRhZGF0 YV9taXNzZXMpOwpwcmludGYoIlx0ICBQcmVmZXRjaCBNZXRhZGF0YTogICAgICAgICAgJTJkJSUg XHQgJWQgXG4iLCAkcHJlZmV0Y2hfbWV0YWRhdGFfbWlzc2VzX3BlcmMsICRwcmVmZXRjaF9tZXRh ZGF0YV9taXNzZXMpOwoKcHJpbnQgIi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLVxuIgojIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj IyMjIwo= --00163616438f91deb30474e79b4f-- From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 07:32:44 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 25AD71065676; Fri, 2 Oct 2009 07:32:44 +0000 (UTC) (envelope-from trasz@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id F15B98FC22; Fri, 2 Oct 2009 07:32:43 +0000 (UTC) Received: from freefall.freebsd.org (trasz@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n927WhGd036441; Fri, 2 Oct 2009 07:32:43 GMT (envelope-from trasz@freefall.freebsd.org) Received: (from trasz@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n927Wh1u036437; Fri, 2 Oct 2009 
07:32:43 GMT (envelope-from trasz) Date: Fri, 2 Oct 2009 07:32:43 GMT Message-Id: <200910020732.n927Wh1u036437@freefall.freebsd.org> To: trasz@FreeBSD.org, freebsd-fs@FreeBSD.org, trasz@FreeBSD.org From: trasz@FreeBSD.org Cc: Subject: Re: kern/133373: [zfs] umass attachment causes ZFS checksum errors, data loss X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 07:32:44 -0000 Synopsis: [zfs] umass attachment causes ZFS checksum errors, data loss Responsible-Changed-From-To: freebsd-fs->trasz Responsible-Changed-By: trasz Responsible-Changed-When: Fri Oct 2 07:32:43 UTC 2009 Responsible-Changed-Why: I'll take it. http://www.freebsd.org/cgi/query-pr.cgi?pr=133373 From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 07:59:10 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0D3AF10656B3 for ; Fri, 2 Oct 2009 07:59:10 +0000 (UTC) (envelope-from bra@fsn.hu) Received: from people.fsn.hu (people.fsn.hu [195.228.252.137]) by mx1.freebsd.org (Postfix) with ESMTP id D7D0D8FC1E for ; Fri, 2 Oct 2009 07:59:08 +0000 (UTC) Received: by people.fsn.hu (Postfix, from userid 1001) id E47AF13564E; Fri, 2 Oct 2009 09:59:06 +0200 (CEST) X-CRM114-Version: 20090423-BlameSteveJobs ( TRE 0.7.6 (BSD) ) MF-ACE0E1EA [pR: 22.1521] X-CRM114-CacheID: sfid-20091002_09590_F93EDD13 X-CRM114-Status: Good ( pR: 22.1521 ) Message-ID: <4AC5B2C7.2000200@fsn.hu> Date: Fri, 02 Oct 2009 09:59:03 +0200 From: Attila Nagy User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.23) Gecko/20090817 Thunderbird/2.0.0.23 Mnenhy/0.7.6.0 MIME-Version: 1.0 To: freebsd-fs@FreeBSD.org, Kip Macy References: <4AC1E540.9070001@fsn.hu> In-Reply-To: <4AC1E540.9070001@fsn.hu> X-Stationery: 0.4.10 Content-Type: text/plain; charset=ISO-8859-2; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.3 (people.fsn.hu); Fri, 02 Oct 2009 09:59:04 +0200 (CEST) Cc: Subject: Re: ARC size constantly shrinks, then ZFS slows down extremely X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 07:59:10 -0000 On 09/29/09 12:45, Attila Nagy wrote: > I'm using FreeBSD 8 (previously 7) on a machine with a lot of disks > and 32 GB RAM. With 7.x it ran very well for about 50 days, but > suddenly every operation have slowed down. > gstat showed that the disks are working a lot more than usual the > zpool/zfs was pretty unusable. > > I've rebooted the machine then with FreeBSD 8 in the hope the new ZFS > fixes will correct this issue (no 50 days have passed since then, so I > don't know yet) and started to monitor ZFS's statistics. > > It seems that after a reboot, the ARC size starts to grow, then > something flips the switch and it changes to shrinking, instead of > maintaining the size. > > Please see the pictures here: > http://people.fsn.hu/~bra/freebsd/20090929-zfs-arcsize/ > > Before the 27th, the machine ran FreeBSD 7, after that date it runs 8. > > As you can see, no user process tooks the memory, so I don't know why > the ARC size grows first and then start to decrease. 
> > Could it be that the ARC size decreases by such a big amount that it > effectively disappears and this causes the IO activity to go up and kill > the machine? I've upgraded another machine from an older 8-CURRENT to 8-STABLE. It has low memory (1GB) and it's i386. The above symptoms can be triggered very easily: if I do an IMAP search on a lot of mailboxes (which I do regularly), about 10 minutes are needed for the IMAP server to become completely inaccessible. The machine runs fine, but every operation on the ZFS pool takes ages. According to gstat there is only very minimal disk activity. The machine can't even be rebooted, at least not in ten minutes (reboot, wait 10 minutes, nearly nothing happens, reboot -qn makes the machine disappear from the net, but it doesn't restart). Backing out this change from the 8-STABLE kernel: http://svn.freebsd.org/viewvc/base/head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c?r1=191901&r2=191902 makes it survive about half an hour of IMAP searching. Of course only time will tell whether this helps in the long run, but so far 10/10 tries succeeded in killing the machine with this method... According to this, I would say that this change makes things worse both on low-memory i386 (1G RAM) servers and on "there's plenty of RAM" (32 G) amd64 servers. From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 13:47:33 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A80B71065670 for ; Fri, 2 Oct 2009 13:47:33 +0000 (UTC) (envelope-from gerrit@pmp.uni-hannover.de) Received: from mrelay1.uni-hannover.de (mrelay1.uni-hannover.de [130.75.2.106]) by mx1.freebsd.org (Postfix) with ESMTP id 335C58FC16 for ; Fri, 2 Oct 2009 13:47:32 +0000 (UTC) Received: from www.pmp.uni-hannover.de (www.pmp.uni-hannover.de [130.75.117.2]) by mrelay1.uni-hannover.de (8.14.2/8.14.2) with ESMTP id n92DlTn6007474; Fri, 2 Oct 2009 15:47:31 +0200 Received: from pmp.uni-hannover.de (theq.pmp.uni-hannover.de [130.75.117.4]) by www.pmp.uni-hannover.de (Postfix) with SMTP id D790B24; Fri, 2 Oct 2009 15:47:29 +0200 (CEST) Date: Fri, 2 Oct 2009 15:47:29 +0200 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: Rick Macklem Message-Id: <20091002154729.06fbcefc.gerrit@pmp.uni-hannover.de> In-Reply-To: References: <20090917094412.962e8729.gerrit@pmp.uni-hannover.de> <20090918091435.465bfc1e.gerrit@pmp.uni-hannover.de> Organization: Albert-Einstein-Institut (MPI =?ISO-8859-1?Q?f=FCr?= Gravitationsphysik & IGP =?ISO-8859-1?Q?Universit=E4t?= Hannover) X-Mailer: Sylpheed 2.4.2 (GTK+ 2.10.12; i386-portbld-freebsd6.1) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-PMX-Version: 5.5.5.374460, Antispam-Engine: 2.7.1.369594, Antispam-Data: 2009.10.2.133625 Cc: freebsd-fs@freebsd.org Subject: Re: Fw: Linux/KDE and NFS locking on 7-stable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 13:47:33 -0000 On Fri, 18 Sep 2009 10:56:59 -0400 (EDT) Rick Macklem wrote about Re: Fw: Linux/KDE and NFS locking on 7-stable: First of all, thanks to Rick for his explanations.
RM> > RM> I believe setting the following in the server's /etc/rc.conf RM> > RM> and rebooting the server (or just killing off lockd on the RM> > RM> server), combined with "nolock" as you have on the above Linux RM> > RM> mount, might work ok: RM> > RM> rpc_lockd_enable="NO" RM> > RM> rpc_statd_enable="NO" RM> > RM> > I did not try that so far, as there are some clients which seem to RM> > work fine with locking. Well, in the meantime I tried more or less each and every combination of client and server setting I could think of. My first result is that locking almost every time causes troubles, because is does not work over NAT. However, even putting the server in the same subnet and using it without NAT did not make everything work with locking. Right now it looks like after booting the client KDE logins *never* work (all other nfs stuff is fine, though). After a failed login, I can umount and remount the home dir and try to login again. After doing so for some time, it suddenly works (and continues to work) obviously regardless of mount options being changed. Turning off lockd and statd on the server side does not influence this behaviour. Does anyone here have an idea how I can get a working client from the beginning on? Also, I do not even know if this is some kind of kde problem or rather a nfs problem. After all, it is rather annoying do have to login and umount/remount several times before everything works. :-) cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 14:47:26 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A2A391065670 for ; Fri, 2 Oct 2009 14:47:26 +0000 (UTC) (envelope-from gerrit@pmp.uni-hannover.de) Received: from mrelay1.uni-hannover.de (mrelay1.uni-hannover.de [130.75.2.106]) by mx1.freebsd.org (Postfix) with ESMTP id 107578FC18 for ; Fri, 2 Oct 2009 14:47:25 +0000 (UTC) Received: from www.pmp.uni-hannover.de (www.pmp.uni-hannover.de [130.75.117.2]) by mrelay1.uni-hannover.de (8.14.2/8.14.2) with ESMTP id n92ElNqO009647; Fri, 2 Oct 2009 16:47:24 +0200 Received: from pmp.uni-hannover.de (arc.pmp.uni-hannover.de [130.75.117.1]) by www.pmp.uni-hannover.de (Postfix) with SMTP id 599D04F; Fri, 2 Oct 2009 16:47:23 +0200 (CEST) Date: Fri, 2 Oct 2009 16:47:23 +0200 From: Gerrit =?ISO-8859-1?Q?K=FChn?= To: Gerrit =?ISO-8859-1?Q?K=FChn?= Message-Id: <20091002164723.5484c2d1.gerrit@pmp.uni-hannover.de> In-Reply-To: <20091002154729.06fbcefc.gerrit@pmp.uni-hannover.de> References: <20090917094412.962e8729.gerrit@pmp.uni-hannover.de> <20090918091435.465bfc1e.gerrit@pmp.uni-hannover.de> <20091002154729.06fbcefc.gerrit@pmp.uni-hannover.de> Organization: Albert-Einstein-Institut (MPI =?ISO-8859-1?Q?f=FCr?= Gravitationsphysik & IGP =?ISO-8859-1?Q?Universit=E4t?= Hannover) X-Mailer: Sylpheed 2.4.8 (GTK+ 2.12.11; i386-portbld-freebsd7.0) Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable X-PMX-Version: 5.5.5.374460, Antispam-Engine: 2.7.1.369594, Antispam-Data: 2009.10.2.143631 Cc: freebsd-fs@freebsd.org Subject: Re: Fw: Linux/KDE and NFS locking on 7-stable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 14:47:26 -0000 On Fri, 2 Oct 2009 15:47:29 +0200 Gerrit K=FChn wrote about Re: Fw: Linux/KDE and NFS locking on 7-stable: GK> Does anyone here 
have an idea how I can get a working client from the GK> beginning on? Also, I do not even know if this is some kind of kde GK> problem or rather a nfs problem. After all, it is rather annoying do GK> have to login and umount/remount several times before everything GK> works. :-) I have one more thing to add: as a last resort, I set the home dirs mount back to nfsvers=3D2 (together with tcp and nolock). This seems to work fine (although nfs2 is probably not a good idea these days - I guess I am looking forward to 8.0 including a nfs4 server :-). cu Gerrit From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 16:04:09 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 29E37106568F for ; Fri, 2 Oct 2009 16:04:09 +0000 (UTC) (envelope-from aaron@goflexitllc.com) Received: from mail.goflexitllc.com (mail.goflexitllc.com [70.38.81.12]) by mx1.freebsd.org (Postfix) with ESMTP id C24968FC08 for ; Fri, 2 Oct 2009 16:04:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=goflexitllc.com; h=message-id:date:from:mime-version:to:subject:content-type; s= zeta; bh=MqwjRFKRhbVnMoz21i/VWGJwZkY=; b=DGi2OehL2u5fBfxu7YKxDac CtJtJKFtf3POwmRpJRaW8u5lR15+K8A1YDWLXzdNc5SWLM9uisT7O80UqlOFWJ+c SMqlzvIGa2zDr5ngjMdTnX3fHw9Wrf6yQgM6LyfFu DomainKey-Signature: a=rsa-sha1; c=nofws; d=goflexitllc.com; h= message-id:date:from:mime-version:to:subject:content-type; q= dns; s=zeta; b=TMGR5AZ13svauLG4Phc46xZq8lVY9k8WQh4XzO+hNpOrsNfd+ CtXJzman/nG3wtPFC+9RBQYyNc9nPY5VVS4JaEWp3Etqzp7r572rJayPYhSGM0hY NWQDFYryaTvzxNM Received: (qmail 25736 invoked by uid 89); 2 Oct 2009 16:04:07 -0000 Received: (simscan 1.4.1 ppid 25726 pid 25733 t 0.1551s) (scanners: regex: 1.4.1 attach: 1.4.1 clamav: 0.95.2/m:51/d:9840); 02 Oct 0109 16:04:07 -0000 Received: from temp4.wavelinx.net (HELO ?172.16.1.128?) (aaron@goflexitllc.com@69.27.151.4) by mail.goflexitllc.com with ESMTPA; 2 Oct 2009 16:04:07 -0000 Message-ID: <4AC62471.1080009@goflexitllc.com> Date: Fri, 02 Oct 2009 11:04:01 -0500 From: Aaron Hurt User-Agent: Thunderbird 2.0.0.22 (X11/20090719) MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: multipart/mixed; boundary="------------030007000004000709060008" X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: ZFS I/O Error - unable to import/mount pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 16:04:09 -0000 This is a multi-part message in MIME format. --------------030007000004000709060008 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit I have a rather small ZFS pool on a 7-STABLE machine composed of 4 disks in raidz-1. One of the drives was giving me fits and acting flaky so I got a warranty replacement for it, however that drive as well as another both started giving DMA timeout/LBA write errors simultaneously lastnight. I was able to bring the machine back up after it locked (no crash/panic it just locked up) and do a scrub to get the pool online. I then powered down and disconnected the drive that was giving the majority of the errors. When the system came back up it says the pool is faulted, re-attaching the disk I removed does not resolve the fault and the raidz mentions corrupt metadata. I though I might be able to export/import the array but that also failed. 
The array did export but now refuses to import saying I/O error. If anyone could give me some insight on how I could get this pool back online just long enough to snag the data I would be really appreciative. I will also be more than happy to provide any other detailed information you may need just let me know. Thank You, -- Aaron Hurt Managing Partner Flex I.T., LLC 611 Commerce Street Suite 3117 Nashville, TN 37203 Phone: 615.438.7101 E-mail: aaron@goflexitllc.com --------------030007000004000709060008-- From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 16:13:58 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 35BF31065670; Fri, 2 Oct 2009 16:13:58 +0000 (UTC) (envelope-from jamie@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 0C0B18FC19; Fri, 2 Oct 2009 16:13:58 +0000 (UTC) Received: from freefall.freebsd.org (jamie@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n92GDuo4056568; Fri, 2 Oct 2009 16:13:56 GMT (envelope-from jamie@freefall.freebsd.org) Received: (from jamie@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n92GDu8h056564; Fri, 2 Oct 2009 16:13:56 GMT (envelope-from jamie) Date: Fri, 2 Oct 2009 16:13:56 GMT Message-Id: <200910021613.n92GDu8h056564@freefall.freebsd.org> To: ler@lerctr.org, jamie@FreeBSD.org, freebsd-fs@FreeBSD.org From: jamie@FreeBSD.org Cc: Subject: Re: kern/139198: [nfs] Page Fault out of NLM X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 16:13:58 -0000 Synopsis: [nfs] Page Fault out of NLM State-Changed-From-To: open->closed State-Changed-By: jamie State-Changed-When: Fri Oct 2 16:13:00 UTC 2009 State-Changed-Why: Fixed in r197667. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=139198 From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 18:45:44 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 79BAF1065670; Fri, 2 Oct 2009 18:45:44 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (chello087206049004.chello.pl [87.206.49.4]) by mx1.freebsd.org (Postfix) with ESMTP id C29D78FC18; Fri, 2 Oct 2009 18:45:43 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id E24F245C98; Fri, 2 Oct 2009 20:45:40 +0200 (CEST) Received: from localhost (chello087206049004.chello.pl [87.206.49.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 73AC445684; Fri, 2 Oct 2009 20:45:27 +0200 (CEST) Date: Fri, 2 Oct 2009 20:45:26 +0200 From: Pawel Jakub Dawidek To: Attila Nagy Message-ID: <20091002184526.GA1660@garage.freebsd.pl> References: <4AC1E540.9070001@fsn.hu> <4AC5B2C7.2000200@fsn.hu> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="jRHKVT23PllUwdXP" Content-Disposition: inline In-Reply-To: <4AC5B2C7.2000200@fsn.hu> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT i386 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: freebsd-fs@FreeBSD.org Subject: Re: ARC size constantly shrinks, then ZFS slows down extremely X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 18:45:44 -0000 --jRHKVT23PllUwdXP Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Oct 02, 2009 at 09:59:03AM +0200, Attila Nagy wrote: > Backing out this change from the 8-STABLE kernel: > http://svn.freebsd.org/viewvc/base/head/sys/cddl/contrib/opensolaris/uts/= common/fs/zfs/arc.c?r1=3D191901&r2=3D191902 >=20 > makes it survive about half and hour of IMAP searching. Of course only=20 > time will tell whether this helps in the long run, but so far 10/10=20 > tries succeeded to kill the machine with this method... Could you try this patch: http://people.freebsd.org/~pjd/patches/arc.c.4.patch --=20 Pawel Jakub Dawidek http://www.wheel.pl pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! 
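Whichever change turns out to be the fix, the shrinking itself is easy to put numbers on without graphs: log arcstats.size next to the target size (c) and its limits over time and watch when the two start drifting apart. A rough /bin/sh sketch; the 10-second interval and the log path are arbitrary choices:

#!/bin/sh
# Sketch: append one line every 10 seconds with ARC size, target and limits.
while :; do
    t=$(date +%H:%M:%S)
    sz=$(sysctl -n kstat.zfs.misc.arcstats.size)
    c=$(sysctl -n kstat.zfs.misc.arcstats.c)
    echo "$t size=$sz c=$c c_min=$(sysctl -n kstat.zfs.misc.arcstats.c_min) c_max=$(sysctl -n kstat.zfs.misc.arcstats.c_max)"
    sleep 10
done >> /var/tmp/arcsize.log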
--jRHKVT23PllUwdXP Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.4 (FreeBSD) iD8DBQFKxkpGForvXbEpPzQRAqcCAKC/xaM47VeLUZzdX5iaDzoVpGa/pwCg2PQ+ eit4Hi2t+J05XzBwMUMOj0o= =9Av1 -----END PGP SIGNATURE----- --jRHKVT23PllUwdXP-- From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 20:50:03 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B5BCC1065692 for ; Fri, 2 Oct 2009 20:50:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A4C988FC23 for ; Fri, 2 Oct 2009 20:50:03 +0000 (UTC) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n92Ko30V036604 for ; Fri, 2 Oct 2009 20:50:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n92Ko3hR036603; Fri, 2 Oct 2009 20:50:03 GMT (envelope-from gnats) Date: Fri, 2 Oct 2009 20:50:03 GMT Message-Id: <200910022050.n92Ko3hR036603@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Gleb Kurtsou Cc: Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Gleb Kurtsou List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 20:50:03 -0000 The following reply was made to PR kern/127213; it has been noted by GNATS. From: Gleb Kurtsou To: bug-followup@FreeBSD.org, citrin@citrin.ru Cc: Subject: Re: kern/127213: [tmpfs] sendfile on tmpfs data corruption Date: Fri, 2 Oct 2009 23:48:29 +0300 --GvXjxJ+pjyke8COw Content-Type: text/plain; charset=utf-8 Content-Disposition: inline [ resending it with changed subject. looks like it was ignored by bug-followup@ ] Try the patch attached, it fixes the bug for me. The same workaround is used by zfs. 
--GvXjxJ+pjyke8COw Content-Type: text/plain; charset=utf-8 Content-Disposition: attachment; filename="tmpfs-sendfile.patch.txt" diff --git a/sys/fs/tmpfs/tmpfs_vnops.c b/sys/fs/tmpfs/tmpfs_vnops.c index db8ceea..a6e4510 100644 --- a/sys/fs/tmpfs/tmpfs_vnops.c +++ b/sys/fs/tmpfs/tmpfs_vnops.c @@ -43,6 +43,8 @@ __FBSDID("$FreeBSD$"); #include #include #include +#include +#include #include #include #include @@ -428,15 +430,72 @@ tmpfs_setattr(struct vop_setattr_args *v) } /* --------------------------------------------------------------------- */ +static int +tmpfs_nocacheread(vm_object_t tobj, vm_pindex_t idx, + vm_offset_t offset, size_t tlen, struct uio *uio) +{ + vm_page_t m; + int error; + + VM_OBJECT_LOCK(tobj); + vm_object_pip_add(tobj, 1); + m = vm_page_grab(tobj, idx, VM_ALLOC_WIRED | + VM_ALLOC_ZERO | VM_ALLOC_NORMAL | VM_ALLOC_RETRY); + if (m->valid != VM_PAGE_BITS_ALL) { + if (vm_pager_has_page(tobj, idx, NULL, NULL)) { + error = vm_pager_get_pages(tobj, &m, 1, 0); + if (error != 0) { + printf("tmpfs get pages from pager error [read]\n"); + goto out; + } + } else + vm_page_zero_invalid(m, TRUE); + } + VM_OBJECT_UNLOCK(tobj); + error = uiomove_fromphys(&m, offset, tlen, uio); + VM_OBJECT_LOCK(tobj); +out: + vm_page_lock_queues(); + vm_page_unwire(m, TRUE); + vm_page_unlock_queues(); + vm_page_wakeup(m); + vm_object_pip_subtract(tobj, 1); + VM_OBJECT_UNLOCK(tobj); + + return (error); +} + +static __inline int +tmpfs_nocacheread_buf(vm_object_t tobj, vm_pindex_t idx, + vm_offset_t offset, size_t tlen, void *buf) +{ + struct uio uio; + struct iovec iov; + + uio.uio_iovcnt = 1; + uio.uio_iov = &iov; + iov.iov_base = buf; + iov.iov_len = tlen; + + uio.uio_offset = 0; + uio.uio_resid = tlen; + uio.uio_rw = UIO_READ; + uio.uio_segflg = UIO_SYSSPACE; + uio.uio_td = curthread; + + return (tmpfs_nocacheread(tobj, idx, offset, tlen, &uio)); +} static int tmpfs_mappedread(vm_object_t vobj, vm_object_t tobj, size_t len, struct uio *uio) { + struct sf_buf *sf; vm_pindex_t idx; vm_page_t m; vm_offset_t offset; off_t addr; size_t tlen; + char *ma; int error; addr = uio->uio_offset; @@ -460,33 +519,30 @@ lookupvpg: vm_page_wakeup(m); VM_OBJECT_UNLOCK(vobj); return (error); + } else if (m != NULL && uio->uio_segflg == UIO_NOCOPY) { + if (vm_page_sleep_if_busy(m, FALSE, "tmfsmr")) + goto lookupvpg; + vm_page_busy(m); + VM_OBJECT_UNLOCK(vobj); + sched_pin(); + sf = sf_buf_alloc(m, SFB_CPUPRIVATE); + ma = (char *)sf_buf_kva(sf); + error = tmpfs_nocacheread_buf(tobj, idx, offset, tlen, + ma + offset); + if (error == 0) { + uio->uio_offset += tlen; + uio->uio_resid -= tlen; + } + sf_buf_free(sf); + sched_unpin(); + VM_OBJECT_LOCK(vobj); + vm_page_wakeup(m); + VM_OBJECT_UNLOCK(vobj); + return (error); } VM_OBJECT_UNLOCK(vobj); nocache: - VM_OBJECT_LOCK(tobj); - vm_object_pip_add(tobj, 1); - m = vm_page_grab(tobj, idx, VM_ALLOC_WIRED | - VM_ALLOC_ZERO | VM_ALLOC_NORMAL | VM_ALLOC_RETRY); - if (m->valid != VM_PAGE_BITS_ALL) { - if (vm_pager_has_page(tobj, idx, NULL, NULL)) { - error = vm_pager_get_pages(tobj, &m, 1, 0); - if (error != 0) { - printf("tmpfs get pages from pager error [read]\n"); - goto out; - } - } else - vm_page_zero_invalid(m, TRUE); - } - VM_OBJECT_UNLOCK(tobj); - error = uiomove_fromphys(&m, offset, tlen, uio); - VM_OBJECT_LOCK(tobj); -out: - vm_page_lock_queues(); - vm_page_unwire(m, TRUE); - vm_page_unlock_queues(); - vm_page_wakeup(m); - vm_object_pip_subtract(tobj, 1); - VM_OBJECT_UNLOCK(tobj); + error = tmpfs_nocacheread(tobj, idx, offset, tlen, uio); return (error); } 
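A rough way to check that the corruption is really gone after applying the patch above: put a file on tmpfs, serve it through something that uses sendfile(2), and compare checksums of the original and the fetched copy. The mount point, file size and URL below are assumptions for the sketch; any sendfile-using server (for example nginx with "sendfile on") pointed at the tmpfs directory will do:

#!/bin/sh
# Sketch: create a file on tmpfs, fetch it through a sendfile-using daemon,
# and compare digests; paths, size and port are made up for illustration.
mkdir -p /mnt/tmp
mount -t tmpfs tmpfs /mnt/tmp
dd if=/dev/random of=/mnt/tmp/testfile bs=1m count=16
md5 /mnt/tmp/testfile
fetch -q -o - http://127.0.0.1:8080/testfile | md5
# The two digests should match; before the fix they could differ.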
--GvXjxJ+pjyke8COw-- From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 21:54:41 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A88581065676; Fri, 2 Oct 2009 21:54:41 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 78E5B8FC0A; Fri, 2 Oct 2009 21:54:41 +0000 (UTC) Received: from freefall.freebsd.org (linimon@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n92LsfrT007928; Fri, 2 Oct 2009 21:54:41 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n92LsftZ007924; Fri, 2 Oct 2009 21:54:41 GMT (envelope-from linimon) Date: Fri, 2 Oct 2009 21:54:41 GMT Message-Id: <200910022154.n92LsftZ007924@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/139312: [tmpfs] [patch] tmpfs mmap synchronization bug X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 21:54:41 -0000 Old Synopsis: [PATCH] tmpfs mmap synchronization bug New Synopsis: [tmpfs] [patch] tmpfs mmap synchronization bug Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri Oct 2 21:54:26 UTC 2009 Responsible-Changed-Why: http://www.freebsd.org/cgi/query-pr.cgi?pr=139312 From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 22:23:29 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A73C4106566B; Fri, 2 Oct 2009 22:23:29 +0000 (UTC) (envelope-from gleb.kurtsou@gmail.com) Received: from mail-fx0-f222.google.com (mail-fx0-f222.google.com [209.85.220.222]) by mx1.freebsd.org (Postfix) with ESMTP id D65C38FC17; Fri, 2 Oct 2009 22:23:28 +0000 (UTC) Received: by fxm22 with SMTP id 22so1596087fxm.36 for ; Fri, 02 Oct 2009 15:23:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:cc:subject :message-id:references:mime-version:content-type:content-disposition :in-reply-to:user-agent; bh=x4kj5pMZQPuXvxrL7R63qHWQ90ECRPaP0WX/RjCexis=; b=D4F5txsikM6YlXDkPejDquH3ydl62wim6Q+r0o+VQ9G3T7g9CLaycrrRYcEfeoM0IE lx3zfdjN7fIjW6OTCEo+IGHWDzogD6yjwB06X4GLfcYEfugBvNXhBA563P2G3NQna7Hu iiFQLLDJrJy97+N2FN9T92aDEYGG9YmED5xj4= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; b=Jn68+QGN3Up4lIrWhHN9KI0m4QmunGA6pBNSCH6a+VeNr8ta8F0FTo5HPOP2XzWYO0 o+Or2DNveLIe3z2kTQDVy9F91g/vemWcRlU6fa74REW5ehNtmxg9EDycZ3zgKodxSv7n dBuqYQhy44ZTsi1xYusbdH/sIRLhZ4XgJLHyo= Received: by 10.86.169.25 with SMTP id r25mr2868479fge.17.1254522207831; Fri, 02 Oct 2009 15:23:27 -0700 (PDT) Received: from localhost (lan-78-157-90-54.vln.skynet.lt [78.157.90.54]) by mx.google.com with ESMTPS id d4sm318372fga.2.2009.10.02.15.23.26 (version=TLSv1/SSLv3 cipher=RC4-MD5); Fri, 02 Oct 2009 15:23:27 -0700 (PDT) Date: Sat, 3 Oct 2009 01:23:06 +0300 From: Gleb Kurtsou To: bug-followup@FreeBSD.org, delphij@FreeBSD.org, 
gprspb@mail.ru Message-ID: <20091002222306.GA1729@tops> References: <200908121810.n7CIA8Qv058688@freefall.freebsd.org> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="5vNYLRcllDrimb99" Content-Disposition: inline In-Reply-To: <200908121810.n7CIA8Qv058688@freefall.freebsd.org> User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@FreeBSD.org Subject: Re: kern/122038: [tmpfs] [panic] tmpfs: panic: tmpfs_alloc_vp: type 0xc7d2fab0 0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 22:23:29 -0000 --5vNYLRcllDrimb99 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Could you test following patch. I think it should fix the issue, but it seems locking for tn_parent field is missing in some places. It needs a closer look and more thorough testing. --5vNYLRcllDrimb99 Content-Type: text/plain; charset=utf-8 Content-Disposition: attachment; filename="tmpfs-rmparent.patch.txt" diff --git a/sys/fs/tmpfs/tmpfs_subr.c b/sys/fs/tmpfs/tmpfs_subr.c index dad634e..2d28058 100644 --- a/sys/fs/tmpfs/tmpfs_subr.c +++ b/sys/fs/tmpfs/tmpfs_subr.c @@ -375,6 +375,7 @@ loop: vp->v_op = &tmpfs_fifoop_entries; break; case VDIR: + MPASS(node->tn_dir.tn_parent != NULL); if (node->tn_dir.tn_parent == node) vp->v_vflag |= VV_ROOT; break; @@ -653,6 +654,9 @@ tmpfs_dir_getdotdotdent(struct tmpfs_node *node, struct uio *uio) TMPFS_VALIDATE_DIR(node); MPASS(uio->uio_offset == TMPFS_DIRCOOKIE_DOTDOT); + if (node->tn_dir.tn_parent == NULL) + return ENOENT; + dent.d_fileno = node->tn_dir.tn_parent->tn_id; dent.d_type = DT_DIR; dent.d_namlen = 2; diff --git a/sys/fs/tmpfs/tmpfs_vnops.c b/sys/fs/tmpfs/tmpfs_vnops.c index db8ceea..7caac14 100644 --- a/sys/fs/tmpfs/tmpfs_vnops.c +++ b/sys/fs/tmpfs/tmpfs_vnops.c @@ -88,6 +88,10 @@ tmpfs_lookup(struct vop_cachedlookup_args *v) if (cnp->cn_flags & ISDOTDOT) { int ltype = 0; + if (dnode->tn_dir.tn_parent == NULL) { + error = ENOENT; + goto out; + } ltype = VOP_ISLOCKED(dvp); vhold(dvp); VOP_UNLOCK(dvp, 0); @@ -98,6 +102,10 @@ tmpfs_lookup(struct vop_cachedlookup_args *v) vn_lock(dvp, ltype | LK_RETRY); vdrop(dvp); } else if (cnp->cn_namelen == 1 && cnp->cn_nameptr[0] == '.') { + if (dnode->tn_dir.tn_parent == NULL) { + error = ENOENT; + goto out; + } VREF(dvp); *vpp = dvp; error = 0; @@ -959,7 +967,8 @@ tmpfs_rename(struct vop_rename_args *v) * with stale nodes. 
*/ n = tdnode; while (n != n->tn_dir.tn_parent) { - if (n == fnode) { + MPASS(n->tn_dir.tn_parent != NULL); + if (n == fnode || n->tn_dir.tn_parent == NULL) { error = EINVAL; if (newname != NULL) free(newname, M_TMPFSNAME); @@ -1112,6 +1121,7 @@ tmpfs_rmdir(struct vop_rmdir_args *v) node->tn_dir.tn_parent->tn_links--; node->tn_dir.tn_parent->tn_status |= TMPFS_NODE_ACCESSED | \ TMPFS_NODE_CHANGED | TMPFS_NODE_MODIFIED; + node->tn_dir.tn_parent = NULL; cache_purge(dvp); cache_purge(vp); --5vNYLRcllDrimb99-- From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 22:30:04 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 361E81065672 for ; Fri, 2 Oct 2009 22:30:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 252868FC14 for ; Fri, 2 Oct 2009 22:30:04 +0000 (UTC) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n92MU3dD038451 for ; Fri, 2 Oct 2009 22:30:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n92MU348038447; Fri, 2 Oct 2009 22:30:03 GMT (envelope-from gnats) Date: Fri, 2 Oct 2009 22:30:03 GMT Message-Id: <200910022230.n92MU348038447@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Gleb Kurtsou Cc: Subject: Re: kern/122038: [tmpfs] [panic] tmpfs: panic: tmpfs_alloc_vp: type 0xc7d2fab0 0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Gleb Kurtsou List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 22:30:04 -0000 The following reply was made to PR kern/122038; it has been noted by GNATS. From: Gleb Kurtsou To: bug-followup@FreeBSD.org, delphij@FreeBSD.org, gprspb@mail.ru Cc: freebsd-fs@FreeBSD.org Subject: Re: kern/122038: [tmpfs] [panic] tmpfs: panic: tmpfs_alloc_vp: type 0xc7d2fab0 0 Date: Sat, 3 Oct 2009 01:23:06 +0300 --5vNYLRcllDrimb99 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Could you test following patch. I think it should fix the issue, but it seems locking for tn_parent field is missing in some places. It needs a closer look and more thorough testing. 
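Separate from the attached patch, a made-up stress test along these lines can be used to hammer the '..' lookup against a concurrent rmdir; the directory name, the run time and the assumption that /tmp is tmpfs are arbitrary, and it is not a reproducer taken from the PR:

/*
 * Made-up stress test: one thread creates and removes a tmpfs directory
 * while another keeps resolving "<dir>/..", which goes through the
 * ISDOTDOT branch of tmpfs_lookup().
 */
#include <sys/stat.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define DIRPATH "/tmp/tn_parent_race"   /* assumes /tmp is tmpfs */

static volatile int stop;

static void *
churn(void *arg)
{
    (void)arg;
    while (!stop) {
        (void)mkdir(DIRPATH, 0755);
        (void)rmdir(DIRPATH);
    }
    return (NULL);
}

static void *
dotdot(void *arg)
{
    struct stat sb;

    (void)arg;
    while (!stop)
        (void)stat(DIRPATH "/..", &sb);
    return (NULL);
}

int
main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, churn, NULL);
    pthread_create(&t2, NULL, dotdot, NULL);
    sleep(60);              /* let it run for a while */
    stop = 1;
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("done\n");
    return (0);
}

Build it with cc -pthread. Whether it actually triggers the original panic on an unpatched kernel is not something this sketch can promise.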
--5vNYLRcllDrimb99 Content-Type: text/plain; charset=utf-8 Content-Disposition: attachment; filename="tmpfs-rmparent.patch.txt" diff --git a/sys/fs/tmpfs/tmpfs_subr.c b/sys/fs/tmpfs/tmpfs_subr.c index dad634e..2d28058 100644 --- a/sys/fs/tmpfs/tmpfs_subr.c +++ b/sys/fs/tmpfs/tmpfs_subr.c @@ -375,6 +375,7 @@ loop: vp->v_op = &tmpfs_fifoop_entries; break; case VDIR: + MPASS(node->tn_dir.tn_parent != NULL); if (node->tn_dir.tn_parent == node) vp->v_vflag |= VV_ROOT; break; @@ -653,6 +654,9 @@ tmpfs_dir_getdotdotdent(struct tmpfs_node *node, struct uio *uio) TMPFS_VALIDATE_DIR(node); MPASS(uio->uio_offset == TMPFS_DIRCOOKIE_DOTDOT); + if (node->tn_dir.tn_parent == NULL) + return ENOENT; + dent.d_fileno = node->tn_dir.tn_parent->tn_id; dent.d_type = DT_DIR; dent.d_namlen = 2; diff --git a/sys/fs/tmpfs/tmpfs_vnops.c b/sys/fs/tmpfs/tmpfs_vnops.c index db8ceea..7caac14 100644 --- a/sys/fs/tmpfs/tmpfs_vnops.c +++ b/sys/fs/tmpfs/tmpfs_vnops.c @@ -88,6 +88,10 @@ tmpfs_lookup(struct vop_cachedlookup_args *v) if (cnp->cn_flags & ISDOTDOT) { int ltype = 0; + if (dnode->tn_dir.tn_parent == NULL) { + error = ENOENT; + goto out; + } ltype = VOP_ISLOCKED(dvp); vhold(dvp); VOP_UNLOCK(dvp, 0); @@ -98,6 +102,10 @@ tmpfs_lookup(struct vop_cachedlookup_args *v) vn_lock(dvp, ltype | LK_RETRY); vdrop(dvp); } else if (cnp->cn_namelen == 1 && cnp->cn_nameptr[0] == '.') { + if (dnode->tn_dir.tn_parent == NULL) { + error = ENOENT; + goto out; + } VREF(dvp); *vpp = dvp; error = 0; @@ -959,7 +967,8 @@ tmpfs_rename(struct vop_rename_args *v) * with stale nodes. */ n = tdnode; while (n != n->tn_dir.tn_parent) { - if (n == fnode) { + MPASS(n->tn_dir.tn_parent != NULL); + if (n == fnode || n->tn_dir.tn_parent == NULL) { error = EINVAL; if (newname != NULL) free(newname, M_TMPFSNAME); @@ -1112,6 +1121,7 @@ tmpfs_rmdir(struct vop_rmdir_args *v) node->tn_dir.tn_parent->tn_links--; node->tn_dir.tn_parent->tn_status |= TMPFS_NODE_ACCESSED | \ TMPFS_NODE_CHANGED | TMPFS_NODE_MODIFIED; + node->tn_dir.tn_parent = NULL; cache_purge(dvp); cache_purge(vp); --5vNYLRcllDrimb99-- From owner-freebsd-fs@FreeBSD.ORG Fri Oct 2 23:38:26 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3BE811065679; Fri, 2 Oct 2009 23:38:26 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-yx0-f184.google.com (mail-yx0-f184.google.com [209.85.210.184]) by mx1.freebsd.org (Postfix) with ESMTP id D4A8B8FC15; Fri, 2 Oct 2009 23:38:25 +0000 (UTC) Received: by yxe14 with SMTP id 14so1779647yxe.7 for ; Fri, 02 Oct 2009 16:38:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:received:in-reply-to :references:date:x-google-sender-auth:message-id:subject:from:to:cc :content-type:content-transfer-encoding; bh=vq+gNFyaulKkoUI2nme1rf9bQlOFX5/+8FmaIgQP1MU=; b=DBXQvSSW9KFyvGpl3a4KPrF6HbO0f2kYf7SsHc84s1st38KM+adKYislvyMwg0fhVV Y+Rz+OWOngrLlWEkDP2QrNJDpbxfg5qRcGaXxVKepZ1w0QIQg9si0dOp4KMjZRQewL1Z RQntQBkRgwppn0pxv54uHnQDumlkFTDqQuLv0= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=VIfHkRD/D308+eAxG1QhD8z93ZP7kWMbZs4XGpIot7yrzF3G0hGTDxtBCz62wLahHa alQriGcE8KNaWx7530EnkJJrYVNJcJhoXT3RovMkpJvADT98szeljWE5XrKb9/3CqANq z0ql5bHlBp/kRqYjDRSy7u6uAXGtL1ZtDXhjs= MIME-Version: 1.0 Sender: artemb@gmail.com 
Received: by 10.91.27.15 with SMTP id e15mr619898agj.3.1254526705025; Fri, 02 Oct 2009 16:38:25 -0700 (PDT) In-Reply-To: <20091002184526.GA1660@garage.freebsd.pl> References: <4AC1E540.9070001@fsn.hu> <4AC5B2C7.2000200@fsn.hu> <20091002184526.GA1660@garage.freebsd.pl> Date: Fri, 2 Oct 2009 16:38:24 -0700 X-Google-Sender-Auth: d9972f5a04db654b Message-ID: From: Artem Belevich To: Pawel Jakub Dawidek Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ARC size constantly shrinks, then ZFS slows down extremely X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 02 Oct 2009 23:38:26 -0000

With the patch, if vfs.zfs.arc_min is set high enough, the system locks up.

On a box with 8G of RAM I had arc_min=6G and arc_max=7G. Once the ARC grew to ~5.8G as reported by kstat.zfs.misc.arcstats.size, the number of wired pages grew to ~7400MB and the processes got stuck in the 'vmwait' state. I had to reboot in order to recover.

On one hand, setting arc_min that high can be considered a pilot error. On the other, it may be a good idea to allow the system to reclaim memory from the ARC even if the ARC is smaller than arc_min when the system really, really needs it. The question is how to define "really needs it".

On a side note, it appears that the wired page count tends to be substantially larger than the ARC size. For example, in my case, if the ARC size grows to 6G, the wired page count is about 1.5G bigger. Perhaps we should allow reclaiming memory

--Artem

On Fri, Oct 2, 2009 at 11:45 AM, Pawel Jakub Dawidek wrote:
> On Fri, Oct 02, 2009 at 09:59:03AM +0200, Attila Nagy wrote:
>> Backing out this change from the 8-STABLE kernel:
>> http://svn.freebsd.org/viewvc/base/head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c?r1=191901&r2=191902
>>
>> makes it survive about half an hour of IMAP searching. Of course only
>> time will tell whether this helps in the long run, but so far 10/10
>> tries succeeded to kill the machine with this method...
>
> Could you try this patch:
>
>         http://people.freebsd.org/~pjd/patches/arc.c.4.patch
>
> --
> Pawel Jakub Dawidek                       http://www.wheel.pl
> pjd@FreeBSD.org                           http://www.FreeBSD.org
> FreeBSD committer                         Am I Evil? Yes, I Am!
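A quick way to put numbers on the wired-versus-ARC gap described above is to read the same counters directly. A minimal sketch, using the sysctl names mentioned in this thread (the arithmetic and output format are mine):

/* Print the ARC size, the amount of wired memory, and their difference. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    uint64_t arc_size;
    u_int wire_cnt;
    size_t len;
    int64_t wired, delta;

    len = sizeof(arc_size);
    if (sysctlbyname("kstat.zfs.misc.arcstats.size", &arc_size, &len,
        NULL, 0) == -1)
        err(1, "kstat.zfs.misc.arcstats.size");
    len = sizeof(wire_cnt);
    if (sysctlbyname("vm.stats.vm.v_wire_count", &wire_cnt, &len,
        NULL, 0) == -1)
        err(1, "vm.stats.vm.v_wire_count");

    wired = (int64_t)wire_cnt * sysconf(_SC_PAGESIZE);
    delta = wired - (int64_t)arc_size;
    printf("ARC size: %ju MB, wired: %jd MB, wired - ARC: %jd MB\n",
        (uintmax_t)(arc_size >> 20), (intmax_t)(wired >> 20),
        (intmax_t)(delta / (1024 * 1024)));
    return (0);
}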
From owner-freebsd-fs@FreeBSD.ORG Sat Oct 3 00:09:19 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 52CE0106568F for ; Sat, 3 Oct 2009 00:09:19 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (chello087206049004.chello.pl [87.206.49.4]) by mx1.freebsd.org (Postfix) with ESMTP id 7228C8FC19 for ; Sat, 3 Oct 2009 00:09:17 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 9A90145E93; Sat, 3 Oct 2009 02:09:15 +0200 (CEST) Received: from localhost (chello087206049004.chello.pl [87.206.49.4]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id 2BEFA45684; Sat, 3 Oct 2009 02:09:09 +0200 (CEST) Date: Sat, 3 Oct 2009 02:09:09 +0200 From: Pawel Jakub Dawidek To: Artem Belevich Message-ID: <20091003000909.GD1660@garage.freebsd.pl> References: <4AC1E540.9070001@fsn.hu> <4AC5B2C7.2000200@fsn.hu> <20091002184526.GA1660@garage.freebsd.pl> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="Q0rSlbzrZN6k9QnT" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 9.0-CURRENT i386 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: ARC size constantly shrinks, then ZFS slows down extremely X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 03 Oct 2009 00:09:19 -0000

--Q0rSlbzrZN6k9QnT Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable

On Fri, Oct 02, 2009 at 04:38:24PM -0700, Artem Belevich wrote:
> With the patch, if vfs.zfs.arc_min is set high enough, the system locks up.
>
> On a box with 8G of RAM I had arc_min=6G and arc_max=7G. Once the ARC grew
> to ~5.8G as reported by kstat.zfs.misc.arcstats.size, the number of wired
> pages grew to ~7400MB and the processes got stuck in the 'vmwait' state. I
> had to reboot in order to recover.
>
> On one hand, setting arc_min that high can be considered a pilot error. On the
> other, it may be a good idea to allow the system to reclaim memory from
> the ARC even if the ARC is smaller than arc_min when the system really, really
> needs it. The question is how to define "really needs it".
>
> On a side note, it appears that the wired page count tends to be
> substantially larger than the ARC size. For example, in my case, if the ARC
> size grows to 6G, the wired page count is about 1.5G bigger. Perhaps we
> should allow reclaiming memory

Before we start debugging pathological cases, could you try the patch
with default settings? Or, alternatively, with vm.kmem_size set to the
amount of RAM you have.

--
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!
--Q0rSlbzrZN6k9QnT Content-Type: application/pgp-signature Content-Disposition: inline

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.4 (FreeBSD)

iD8DBQFKxpYlForvXbEpPzQRAonOAKDraj7ZSbTaC/31Up5xzjDd0HIfLACgpf4m
WdRgq+8TSAI2nvZbbQKMg2c=
=Ur6x
-----END PGP SIGNATURE-----

--Q0rSlbzrZN6k9QnT--

From owner-freebsd-fs@FreeBSD.ORG Sat Oct 3 00:59:37 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 559661065692; Sat, 3 Oct 2009 00:59:37 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-yw0-f197.google.com (mail-yw0-f197.google.com [209.85.211.197]) by mx1.freebsd.org (Postfix) with ESMTP id EA4898FC0C; Sat, 3 Oct 2009 00:59:36 +0000 (UTC) Received: by ywh35 with SMTP id 35so4377598ywh.7 for ; Fri, 02 Oct 2009 17:59:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:received:in-reply-to :references:date:x-google-sender-auth:message-id:subject:from:to:cc :content-type; bh=4VyeEIcrzDi+lOMGrICjduOOHbXu7zcRYqPuIAgsYfE=; b=bp1su8DuHqpHGnaO+AqTD49Hm7cuO+41On64lcMNm/NGmUleox+O7QxyA4v9F2SCt5 IOb5cLMAjIBA9GqTlPtua6rDsNusrLAkQyEN6nKKl14i3gSh8qBm8QyJ+L73seQsxNVT lWanMz1Qov+BATINT/yLgCFTOOVj56cvX4OaY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; b=wlrG20OcDE5GTEKaPEXFgMKYK7UlrKlvfZclGg+Mzji/YB/vwUN0TzLUokS0Xe8vJQ 6bjZoMEmXvQRz9DwzqvX8gEFANHICuDUKGAFBvei6TBNgv7iBXvksv93AVSKPcTPh6Qi bNaEv5COCBEPOjv5DWIw3UUpC77W6nT1z8MVk= MIME-Version: 1.0 Sender: artemb@gmail.com Received: by 10.91.178.19 with SMTP id f19mr1811136agp.33.1254531576337; Fri, 02 Oct 2009 17:59:36 -0700 (PDT) In-Reply-To: <20091003000909.GD1660@garage.freebsd.pl> References: <4AC1E540.9070001@fsn.hu> <4AC5B2C7.2000200@fsn.hu> <20091002184526.GA1660@garage.freebsd.pl> <20091003000909.GD1660@garage.freebsd.pl> Date: Fri, 2 Oct 2009 17:59:36 -0700 X-Google-Sender-Auth: 321a7f472bea9057 Message-ID: From: Artem Belevich To: Pawel Jakub Dawidek Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ARC size constantly shrinks, then ZFS slows down extremely X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 03 Oct 2009 00:59:37 -0000

> Before we start debugging pathological cases, could you try the patch
> with default settings? Or, alternatively, with vm.kmem_size set to the
> amount of RAM you have.

The system runs stable/8 r197716 on a 4-core amd64 with 8G of RAM and the default /boot/loader.conf. The kernel comes up with the following parameters:

vm.kmem_size: 2764533760
vfs.zfs.arc_min: 215979200
vfs.zfs.arc_max: 1727833600

Under load the ARC size reaches ~1.7G. At that time top reports:

Mem: 47M Active, 11M Inact, 2158M Wired, 268K Cache, 21M Buf, 5693M Free

However, as the FS load continues, the ARC size stays at 1.7G for a couple of minutes, then shrinks down to 1.2G, then slowly grows back to 1.7G, stays there for a little while, and then the shrink/grow cycle repeats. Throughout the test there is always ~5G of *free* memory.
===============================================================

Now, the same experiment with vm.kmem_size=8G:

vm.kmem_size: 8589934592
vfs.zfs.arc_min: 939524096
vfs.zfs.arc_max: 7516192768

The ARC grows to 6.2G:

Mem: 47M Active, 13M Inact, 7376M Wired, 31M Buf, 473M Free

Then it quickly shrinks to 4.6G and grows to 6.2G again, shrinks again, etc. What's different from the previous case is that after a while ZFS adjusts the target size (kstat.zfs.misc.arcstats.c) down to ~5.8G, and after that the ARC size oscillates between 4.2G and 5.6G.

Another observation -- ARC shrinking happens when the system is left with ~512M of free memory.

Yet another observation is that even with an ARC peak of ~5.8G, the system has about 7.5G wired. Where did almost 2G of difference go? Fragmentation?

I've tried both experiments with and without L2ARC -- the behavior seems to be the same.

--Artem

From owner-freebsd-fs@FreeBSD.ORG Sat Oct 3 09:30:04 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E38EF1065670 for ; Sat, 3 Oct 2009 09:30:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D1B908FC18 for ; Sat, 3 Oct 2009 09:30:03 +0000 (UTC) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n939U3WE077254 for ; Sat, 3 Oct 2009 09:30:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n939U3kc077250; Sat, 3 Oct 2009 09:30:03 GMT (envelope-from gnats) Date: Sat, 3 Oct 2009 09:30:03 GMT Message-Id: <200910030930.n939U3kc077250@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Gleb Kurtsou Cc: Subject: Re: kern/138367: [tmpfs] [panic] 'panic: Assertion pages > 0 failed' when running regression/tmpfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Gleb Kurtsou List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 03 Oct 2009 09:30:04 -0000

The following reply was made to PR kern/138367; it has been noted by GNATS.

From: Gleb Kurtsou To: bug-followup@FreeBSD.org, bruce@cran.org.uk Cc: Subject: Re: kern/138367: [tmpfs] [panic] 'panic: Assertion pages > 0 failed' when running regression/tmpfs Date: Sat, 3 Oct 2009 12:21:08 +0300

--7AUc2qLy4jB3hD7Z Content-Type: text/plain; charset=utf-8 Content-Disposition: inline

I wasn't able to trigger it on amd64, but there were several integer overflow bugs. Besides, there was an inconsistency in how the maximum values were set: the maximum number of pages was set to SIZE_MAX (if no value was provided by the user), but the maximum file size depended on the swap/memory available at the moment the filesystem was mounted. I've set the maximum file size to 4GB (or to the size limit set by the user, whichever is smaller). It could be changed to the uint64_t maximum, but 4GB seems to be a sufficient limit to prevent resource exhaustion.

Would you try this patch? I have no i386 system running to test it.
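To see the class of overflow being fixed here in isolation, a toy program (mine, not part of the attached patch; uint32_t stands in for an i386 size_t) shows what howmany() does when size_max sits near the top of the type:

/*
 * Toy illustration of the 32-bit overflow: howmany() adds PAGE_SIZE - 1
 * before dividing, so an unclamped size_max near UINT32_MAX wraps around
 * and the page count comes out as zero.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define howmany(x, y)   (((x) + ((y) - 1)) / (y))   /* as in sys/param.h */

int
main(void)
{
    uint32_t size_max = UINT32_MAX;             /* i386 size_t stand-in */
    uint32_t clamped = INT32_MAX - PAGE_SIZE;   /* patch-style clamp */

    printf("unclamped: howmany(0x%x, %u) = %u pages\n",
        (unsigned)size_max, (unsigned)PAGE_SIZE,
        (unsigned)howmany(size_max, PAGE_SIZE));
    printf("clamped:   howmany(0x%x, %u) = %u pages\n",
        (unsigned)clamped, (unsigned)PAGE_SIZE,
        (unsigned)howmany(clamped, PAGE_SIZE));
    return (0);
}

The unclamped case wraps to 4094/4096 and reports zero pages, which is one way an assertion like 'pages > 0' can trip; clamping first, as the patch does with SSIZE_MAX - PAGE_SIZE, keeps the addition from overflowing.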
--7AUc2qLy4jB3hD7Z Content-Type: text/plain; charset=utf-8 Content-Disposition: attachment; filename="tmpfs-intoverflow.patch.txt" diff --git a/sys/fs/tmpfs/tmpfs.h b/sys/fs/tmpfs/tmpfs.h index ffd705f..42a0e5d 100644 --- a/sys/fs/tmpfs/tmpfs.h +++ b/sys/fs/tmpfs/tmpfs.h @@ -470,6 +470,11 @@ int tmpfs_truncate(struct vnode *, off_t); #define TMPFS_PAGES_RESERVED (4 * 1024 * 1024 / PAGE_SIZE) /* + * Set maximum file size to 4 GB. + */ +#define TMPFS_MAXFILESIZE UINT_MAX + +/* * Returns information about the number of available memory pages, * including physical and virtual ones. * diff --git a/sys/fs/tmpfs/tmpfs_vfsops.c b/sys/fs/tmpfs/tmpfs_vfsops.c index f0ae6be..cdefaba 100644 --- a/sys/fs/tmpfs/tmpfs_vfsops.c +++ b/sys/fs/tmpfs/tmpfs_vfsops.c @@ -82,57 +82,6 @@ static const char *tmpfs_opts[] = { }; /* --------------------------------------------------------------------- */ - -#define SWI_MAXMIB 3 - -static u_int -get_swpgtotal(void) -{ - struct xswdev xsd; - char *sname = "vm.swap_info"; - int soid[SWI_MAXMIB], oid[2]; - u_int unswdev, total, dmmax, nswapdev; - size_t mibi, len; - - total = 0; - - len = sizeof(dmmax); - if (kernel_sysctlbyname(curthread, "vm.dmmax", &dmmax, &len, - NULL, 0, NULL, 0) != 0) - return total; - - len = sizeof(nswapdev); - if (kernel_sysctlbyname(curthread, "vm.nswapdev", - &nswapdev, &len, - NULL, 0, NULL, 0) != 0) - return total; - - mibi = (SWI_MAXMIB - 1) * sizeof(int); - oid[0] = 0; - oid[1] = 3; - - if (kernel_sysctl(curthread, oid, 2, - soid, &mibi, (void *)sname, strlen(sname), - NULL, 0) != 0) - return total; - - mibi = (SWI_MAXMIB - 1); - for (unswdev = 0; unswdev < nswapdev; ++unswdev) { - soid[mibi] = unswdev; - len = sizeof(struct xswdev); - if (kernel_sysctl(curthread, - soid, mibi + 1, &xsd, &len, NULL, 0, - NULL, 0) != 0) - return total; - if (len == sizeof(struct xswdev)) - total += (xsd.xsw_nblks - dmmax); - } - - /* Not Reached */ - return total; -} - -/* --------------------------------------------------------------------- */ static int tmpfs_node_ctor(void *mem, int size, void *arg, int flags) { @@ -186,7 +135,7 @@ tmpfs_mount(struct mount *mp) int error; /* Size counters. */ ino_t nodes_max; - size_t size_max; + off_t size_max; /* Root node attributes. */ uid_t root_uid; @@ -230,8 +179,7 @@ tmpfs_mount(struct mount *mp) /* Do not allow mounts if we do not have enough memory to preserve * the minimum reserved pages. */ - mem_size = cnt.v_free_count + cnt.v_inactive_count + get_swpgtotal(); - mem_size -= mem_size > cnt.v_wire_count ? cnt.v_wire_count : mem_size; + mem_size = tmpfs_mem_info(); if (mem_size < TMPFS_PAGES_RESERVED) return ENOSPC; @@ -239,14 +187,17 @@ tmpfs_mount(struct mount *mp) * allowed to use, based on the maximum size the user passed in * the mount structure. A value of zero is treated as if the * maximum available space was requested. */ - if (size_max < PAGE_SIZE || size_max >= SIZE_MAX) - pages = SIZE_MAX; + /* XXX Choose maximum values to prevent integer overflow */ + if (size_max < PAGE_SIZE || size_max > SSIZE_MAX - PAGE_SIZE) + pages = SSIZE_MAX - PAGE_SIZE; else pages = howmany(size_max, PAGE_SIZE); MPASS(pages > 0); + CTASSERT(sizeof(ino_t) == sizeof(uint32_t)); + /* Set maximum node number to 2GB to prevent integer overflow. 
*/ if (nodes_max <= 3) - nodes = 3 + pages * PAGE_SIZE / 1024; + nodes = qmin(pages + 3, INT_MAX); else nodes = nodes_max; MPASS(nodes >= 3); @@ -258,7 +209,11 @@ tmpfs_mount(struct mount *mp) mtx_init(&tmp->allnode_lock, "tmpfs allnode lock", NULL, MTX_DEF); tmp->tm_nodes_max = nodes; tmp->tm_nodes_inuse = 0; - tmp->tm_maxfilesize = (u_int64_t)(cnt.v_page_count + get_swpgtotal()) * PAGE_SIZE; + if ((u_int64_t)pages < (OFF_MAX >> PAGE_SHIFT)) + tmp->tm_maxfilesize = qmin((u_int64_t)(pages) * PAGE_SIZE, + TMPFS_MAXFILESIZE); + else + tmp->tm_maxfilesize = TMPFS_MAXFILESIZE; LIST_INIT(&tmp->tm_nodes_used); tmp->tm_pages_max = pages; --7AUc2qLy4jB3hD7Z--

From owner-freebsd-fs@FreeBSD.ORG Sat Oct 3 13:10:51 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 07C511065672 for ; Sat, 3 Oct 2009 13:10:51 +0000 (UTC) (envelope-from "") Received: from mail.internationalconspiracy.org (mail.internationalconspiracy.org [85.234.142.62]) by mx1.freebsd.org (Postfix) with ESMTP id BA10C8FC14 for ; Sat, 3 Oct 2009 13:10:50 +0000 (UTC) Received: from localhost (mail.internationalconspiracy.org [85.234.142.62]) by mail.internationalconspiracy.org (Postfix) with SMTP id B60F42ABC4 for ; Sat, 3 Oct 2009 13:53:58 +0100 (BST) Received: from [192.168.124.182] (82-43-101-190.cable.ubr09.croy.blueyonder.co.uk [82.43.101.190]) by mail.internationalconspiracy.org (Postfix) with ESMTPSA id 79AF12ABA0; Sat, 3 Oct 2009 13:53:55 +0100 (BST) From: Alex Trull To: Artem Belevich In-Reply-To: References: <8c9ae7950910011322j1a6b66fcp73615cc17ae20328@mail.gmail.com> Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature"; boundary="=-q41iUhPpdx33qog7nLO5" Date: Sat, 03 Oct 2009 13:53:55 +0100 Message-Id: <1254574435.7247.15.camel@porksoda.turandot.home> Mime-Version: 1.0 X-Mailer: Evolution 2.26.1 X-DSPAM-Result: Innocent X-DSPAM-Processed: Sat Oct 3 13:53:58 2009 X-DSPAM-Confidence: 0.9994 X-DSPAM-Probability: 0.0000 X-DSPAM-Signature: 290,4ac74966781481551944914 Cc: freebsd-fs@freebsd.org, Alexander Shevchenko Subject: Re: ARC & L2ARC efficiency X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 03 Oct 2009 13:10:51 -0000

--=-q41iUhPpdx33qog7nLO5 Content-Type: text/plain Content-Transfer-Encoding: quoted-printable

I went straight for munin, since I like to track these stats long term along with everything else going on. There are some older scripts I put on muninexchange, but since zfs13 came to 7 I have split and updated them to handle the newer counters, including l2arc:

scripts zfs-*: http://trull.org/~alex/src/FreeBSD/muninplugins/

example graphs (forgive the amazingly redundant path):
http://web.internationalconspiracy.org/munin/internationalconspiracy.org/potjie.internationalconspiracy.org.html#Filesystem

If you want to see more short-term performance stats while running a filesystem benchmark (or just under normal load), I recommend 'zpool iostat -v poolname N', where poolname is the pool name and N is the refresh interval in seconds. This shows not only read and write IO but also the per-device throughput, including the l2arc cache devices.

--
Alex

On Thu, 2009-10-01 at 16:20 -0700, Artem Belevich wrote:
> Here's another script:
> http://www.solarisinternals.com/wiki/index.php/Arcstat
>
> Attached a hacked FreeBSD version.
>
> --Artem
>
>
>
> On Thu, Oct 1, 2009 at 4:00 PM, Artem Belevich wrote:
> > There's a pretty useful script to present ARC stats (alas, L2ARC info
> > is not included) in a readable way:
> > http://cuddletech.com/arc_summary/
> >
> > I've attached a somewhat hacked (and a bit outdated) version that runs on FreeBSD.
> >
> > --Artem
> >
> >
> >
> > On Thu, Oct 1, 2009 at 1:22 PM, Alexander Shevchenko wrote:
> >> Good time of day!
> >>
> >> How could I check the efficiency of the ARC?
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

--=-q41iUhPpdx33qog7nLO5 Content-Type: application/pgp-signature; name="signature.asc" Content-Description: This is a digitally signed message part

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEABECAAYFAkrHSWAACgkQey4m6/eWxTTykwCfcKpKcR0+1NaL/2d4LsA5MfKP
gPsAn0cSEf2GrVuhyKEsNYpAKOWyZqVj
=r5mw
-----END PGP SIGNATURE-----

--=-q41iUhPpdx33qog7nLO5--
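For a quick look at ARC and L2ARC efficiency without installing any of the scripts mentioned above, the same hit/miss counters can be read directly. A minimal sketch; the kstat.zfs.misc.arcstats names are the commonly exported ones, but which of them exist depends on the ZFS version in use:

/* Compute ARC and L2ARC hit ratios from the arcstats sysctl counters. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t
arcstat(const char *leaf)
{
    char name[128];
    uint64_t v;
    size_t len = sizeof(v);

    snprintf(name, sizeof(name), "kstat.zfs.misc.arcstats.%s", leaf);
    if (sysctlbyname(name, &v, &len, NULL, 0) == -1)
        err(1, "%s", name);
    return (v);
}

int
main(void)
{
    uint64_t hits = arcstat("hits"), misses = arcstat("misses");
    uint64_t l2_hits = arcstat("l2_hits"), l2_misses = arcstat("l2_misses");

    if (hits + misses > 0)
        printf("ARC   hit ratio: %.1f%%\n",
            100.0 * hits / (hits + misses));
    if (l2_hits + l2_misses > 0)
        printf("L2ARC hit ratio: %.1f%%\n",
            100.0 * l2_hits / (l2_hits + l2_misses));
    return (0);
}

This is essentially the ratio the arc_summary and arcstat scripts referenced in the thread report, with far less detail.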