From owner-freebsd-fs@FreeBSD.ORG Sun Apr 4 15:00:30 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1B16D106564A for ; Sun, 4 Apr 2010 15:00:30 +0000 (UTC) (envelope-from bsd@nezmer.info) Received: from mail.nezmer.info (nezmer.info [97.107.142.36]) by mx1.freebsd.org (Postfix) with ESMTP id F0C8F8FC13 for ; Sun, 4 Apr 2010 15:00:29 +0000 (UTC) Date: Sun, 4 Apr 2010 10:47:14 -0400 From: Nezmer To: freebsd-fs@freebsd.org Message-ID: <20100404144714.GA21331@mail> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.20 (2009-06-14) Subject: 8-STABLE/amd64: XFS panic(backtrace) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 15:00:30 -0000 Hi, Whenever I mount an XFS partition, I get a panic within 1 or 2 minutes. 
Backtrace and relevant files are available here: http://nezmer.info/public/xfs_report.tar.gz From owner-freebsd-fs@FreeBSD.ORG Sun Apr 4 17:37:44 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E7F9B106566C for ; Sun, 4 Apr 2010 17:37:44 +0000 (UTC) (envelope-from kabaev@gmail.com) Received: from mail-qy0-f195.google.com (mail-qy0-f195.google.com [209.85.221.195]) by mx1.freebsd.org (Postfix) with ESMTP id 9B60D8FC2D for ; Sun, 4 Apr 2010 17:37:44 +0000 (UTC) Received: by qyk33 with SMTP id 33so3535117qyk.28 for ; Sun, 04 Apr 2010 10:37:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:cc:subject :message-id:in-reply-to:references:x-mailer:mime-version :content-type; bh=MRU9RX77AxrBFO/pBd4k2c6yc2gtw4Ab+Uljk+MqZ0Q=; b=Y8/DH3pTrveNpr8alufE+bzO/JjeyzbkPpDxZp3cQuLtvRKh4HROve+hreRmpayE8+ RARNxbrArtkmcyRpW2hSi1zhd0Oupv0qodvbCODoYB4TZKtsXM0NCF0qf1zzmD0Bf0Fp zYxYhvVkCh7ixX0AL4NNrIL47jWK5CbRoZwV4= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:cc:subject:message-id:in-reply-to:references:x-mailer :mime-version:content-type; b=a2kuVr3ikRnatypGPKEBJd+sonu3tBYBImXEET7TiC/7JfySqNfPR0Z7SFu/R0Trt4 2RKxD0EYCrFOrNsPKRKJmn/a8NJCi/bcB3Mc+TT45yR8m/6/tEmWDS2PWOHP4ZSDAS7I DVa3B91z4IHTUCDKm/k8KmN2StxU7ufc6fixU= Received: by 10.224.23.141 with SMTP id r13mr1627518qab.334.1270400999229; Sun, 04 Apr 2010 10:09:59 -0700 (PDT) Received: from kan.dnsalias.net (c-24-63-226-98.hsd1.ma.comcast.net [24.63.226.98]) by mx.google.com with ESMTPS id 21sm6466357qyk.1.2010.04.04.10.09.57 (version=SSLv3 cipher=RC4-MD5); Sun, 04 Apr 2010 10:09:58 -0700 (PDT) Date: Sun, 4 Apr 2010 13:09:52 -0400 From: Alexander Kabaev To: Nezmer Message-ID: <20100404130952.1672cf70@kan.dnsalias.net> In-Reply-To: <20100404144714.GA21331@mail> References: 
<20100404144714.GA21331@mail> X-Mailer: Claws Mail 3.7.5 (GTK+ 2.18.7; amd64-portbld-freebsd9.0) Mime-Version: 1.0 Content-Type: multipart/signed; micalg=PGP-SHA1; boundary="Sig_/GjZoHYaVS3_ZzTt=Ke/Ql_3"; protocol="application/pgp-signature" Cc: freebsd-fs@freebsd.org Subject: Re: 8-STABLE/amd64: XFS panic(backtrace) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 17:37:45 -0000 --Sig_/GjZoHYaVS3_ZzTt=Ke/Ql_3 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable On Sun, 4 Apr 2010 10:47:14 -0400 Nezmer wrote: > Hi, > Whenever I mount an XFS partition, I get a panic within 1 or 2 > minutes. >=20 > Backtrace and relevant files are available here: > http://nezmer.info/public/xfs_report.tar.gz > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" Do not mount it read-write, it is not supported. 
--=20 Alexander Kabaev --Sig_/GjZoHYaVS3_ZzTt=Ke/Ql_3 Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iD8DBQFLuMfkQ6z1jMm+XZYRAluqAJwJGI9Ki/UeOLsirCQN2NcKwITCpwCfVFcC cV7hF4Xm/hGhWH3XJxUUYg0= =GRJE -----END PGP SIGNATURE----- --Sig_/GjZoHYaVS3_ZzTt=Ke/Ql_3-- From owner-freebsd-fs@FreeBSD.ORG Sun Apr 4 19:20:21 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 983051065670 for ; Sun, 4 Apr 2010 19:20:21 +0000 (UTC) (envelope-from nekoexmachina@gmail.com) Received: from mail-fx0-f209.google.com (mail-fx0-f209.google.com [209.85.220.209]) by mx1.freebsd.org (Postfix) with ESMTP id 2A8D28FC18 for ; Sun, 4 Apr 2010 19:20:18 +0000 (UTC) Received: by fxm1 with SMTP id 1so2363050fxm.13 for ; Sun, 04 Apr 2010 12:20:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:subject :message-id:mime-version:content-type:content-disposition:user-agent; bh=3hmlbyDWg5XQURT9e/bSWVmP2U3x1dz4AYW45rzBIU0=; b=NXDw9HFNJqawYn0rnHSkrYuPzVrhET+SeHQJhk8i8qBa5pbdJ5SlxfvV9i7moqJJeo lch7VxM2vRfipEArDykulmw5yzU5ZWtV132/yRRTFUxxxaVgaaTqfqFwCLQpH8s4KLnm 2sMICrWxfZGUTOTu62WJ+ZFPKTLtNqVlHlNmU= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:subject:message-id:mime-version:content-type :content-disposition:user-agent; b=PRp5ho9y6lFYIE5UQ8x91vHfH+Z50dJNCUVDbiaO/RnVsa/SwP+ftw3Gut2H+akJw9 VbdSXEtiF6LOaLD+8/eAjNdL3qEmyF3+peOF0cYT5S5JFsBL4dX7/vgUbjneYaa7ZI73 xTJpzdKsy4vwweQgl+jWeQX4nDZ8dDCh++gUY= Received: by 10.223.5.71 with SMTP id 7mr4770525fau.48.1270408817610; Sun, 04 Apr 2010 12:20:17 -0700 (PDT) Received: from localhost ([188.134.12.208]) by mx.google.com with ESMTPS id f31sm24681130fkf.18.2010.04.04.12.20.16 (version=TLSv1/SSLv3 
cipher=RC4-MD5); Sun, 04 Apr 2010 12:20:17 -0700 (PDT) Date: Sun, 4 Apr 2010 23:18:45 +0400 From: Mikle To: freebsd-fs@freebsd.org Message-ID: <20100404191844.GA5071@takino.homeftp.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.20 (2009-06-14) Subject: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 19:20:21 -0000 Hello, list! I've got a strange problem with a one-disk ZFS pool: read/write performance for files on the filesystem (dd if=/dev/zero of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while reading raw from the disk (dd if=/dev/disk of=/dev/zero bs=4M count=100) gives me ~70 MB/s. The pool is about 80% full; the PC hosting the pool has 2GB of RAM (1.5GB of which is free); I've done no ZFS tuning in loader.conf or sysctl.conf. There are no error messages related to the disk in dmesg (dmesg | grep ^ad12), and SMART looks OK. Until recently the disk behaved fine, and nothing in software or hardware has changed since then. Any ideas what could have happened to the disk? 
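[Editor's note: the two dd checks described in the message can be wrapped in a small script for repeatable before/after comparisons. This is a sketch, not from the thread; MNT and DISK are placeholders to be pointed at the pool's mountpoint and underlying device (e.g. /dev/ad12) on your own system.]

```shell
#!/bin/sh
# MNT and DISK are placeholders -- substitute your own pool mountpoint and
# raw device. With DISK left empty, the raw-read test is skipped.
MNT=${MNT:-/tmp}
DISK=${DISK:-}

# Sequential write through the filesystem (the path that showed 2 MB/s).
dd if=/dev/zero of="$MNT/ddtest.bin" bs=4M count=8 2>&1 | tail -1
rm -f "$MNT/ddtest.bin"

# Raw sequential read from the device (the path that showed ~70 MB/s).
# /dev/null is the conventional data sink; the original message used
# of=/dev/zero, which also discards writes on FreeBSD.
if [ -n "$DISK" ]; then
    dd if="$DISK" of=/dev/null bs=4M count=8 2>&1 | tail -1
fi
```

Running it once before and once after a reboot gives directly comparable throughput figures.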
Wbr, From owner-freebsd-fs@FreeBSD.ORG Sun Apr 4 20:41:29 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AE9B6106566C for ; Sun, 4 Apr 2010 20:41:29 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta08.westchester.pa.mail.comcast.net (qmta08.westchester.pa.mail.comcast.net [76.96.62.80]) by mx1.freebsd.org (Postfix) with ESMTP id 5E7208FC41 for ; Sun, 4 Apr 2010 20:41:29 +0000 (UTC) Received: from omta21.westchester.pa.mail.comcast.net ([76.96.62.72]) by qmta08.westchester.pa.mail.comcast.net with comcast id 1XT81e0041ZXKqc58YhV0J; Sun, 04 Apr 2010 20:41:29 +0000 Received: from koitsu.dyndns.org ([98.248.46.159]) by omta21.westchester.pa.mail.comcast.net with comcast id 1Yl81e0073S48mS3hYl9tp; Sun, 04 Apr 2010 20:45:09 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 76A499B419; Sun, 4 Apr 2010 13:41:27 -0700 (PDT) Date: Sun, 4 Apr 2010 13:41:27 -0700 From: Jeremy Chadwick To: Mikle Message-ID: <20100404204127.GA53469@icarus.home.lan> References: <20100404191844.GA5071@takino.homeftp.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20100404191844.GA5071@takino.homeftp.org> User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 20:41:29 -0000 On Sun, Apr 04, 2010 at 11:18:45PM +0400, Mikle wrote: > Hello, list! > I've got some strange problem with one-disk zfs-pool: read/write performance for the files on the fs (dd if=/dev/zero of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while reading from the disk (dd if=/dev/disk of=/dev/zero bs=4M count=100) gives me ~70MB/s. 
> pool is about 80% full; PC with the pool has 2GB of ram (1.5 of which is free); i've done no tuning in loader.conf and sysctl.conf for zfs. In dmesg there is no error-messages related to the disk (dmesg|grep ^ad12); s.m.a.r.t. seems OK. > Some time ago disk was OK, nothing in software/hardware has changed from that day. > Any ideas what could have happen to the disk? Please provide the following output: 1) uname -a 2) sysctl kstat.zfs.misc.arcstats 3) smartctl -a /dev/ad12 Also, does rebooting the box restore write speed (yes, this is a serious question/recommendation)? -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Sun Apr 4 21:27:11 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 91C06106564A for ; Sun, 4 Apr 2010 21:27:11 +0000 (UTC) (envelope-from nekoexmachina@gmail.com) Received: from mail-fx0-f209.google.com (mail-fx0-f209.google.com [209.85.220.209]) by mx1.freebsd.org (Postfix) with ESMTP id 165978FC0A for ; Sun, 4 Apr 2010 21:27:10 +0000 (UTC) Received: by fxm1 with SMTP id 1so2402968fxm.13 for ; Sun, 04 Apr 2010 14:27:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:cc:subject :message-id:references:mime-version:content-type:content-disposition :in-reply-to:user-agent; bh=WLAM/lWXqHpOPn+xx0WDCsgx79u/l7rNYsO+4T/WeDQ=; b=Djl81PmOVrKSa5b1uhhG11tYR6BQAY6vJ85n7K24R/dCshk1bkFFpWDvLfp5+bQNFc VFIkq8FwTzYhmemkqB1rK3EOQlDLPYK8R3l/N+gFiY/JrQRgff/diO4SdLwv0n6ajwWH yv5SD9j0xqVimpmeAqdXBNPzCsLZBjlEcWIbE= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:cc:subject:message-id:references:mime-version 
:content-type:content-disposition:in-reply-to:user-agent; b=TlG6MkuqeUGGiQJDSyLGNPy8ZSN/Gsp+hIIsVYDvEo2A89KhACREiZDK1QutgZleJM D/w32aPihuH+t/wwZ5HdF/mmpzs2JWZgr+UQtgC1vUnysvWIuOX9qzln8BGtcsrw5pKJ KHMwvgbSPBiLrAFYxzNUrG6RRnBzZdS2fC6xA= Received: by 10.223.143.67 with SMTP id t3mr4739377fau.16.1270416429802; Sun, 04 Apr 2010 14:27:09 -0700 (PDT) Received: from localhost ([188.134.12.208]) by mx.google.com with ESMTPS id 22sm24846890fkq.47.2010.04.04.14.27.08 (version=TLSv1/SSLv3 cipher=RC4-MD5); Sun, 04 Apr 2010 14:27:09 -0700 (PDT) Date: Mon, 5 Apr 2010 01:25:36 +0400 From: Mikle To: Jeremy Chadwick Message-ID: <20100404212536.GA1159@takino.homeftp.org> References: <20100404191844.GA5071@takino.homeftp.org> <20100404204127.GA53469@icarus.home.lan> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="PNTmBPCT7hxwcZjr" Content-Disposition: inline In-Reply-To: <20100404204127.GA53469@icarus.home.lan> User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 04 Apr 2010 21:27:11 -0000 --PNTmBPCT7hxwcZjr Content-Type: text/plain; charset=us-ascii Content-Disposition: inline On Sun, Apr 04, 2010 at 01:41:27PM -0700, Jeremy Chadwick wrote: > Please provide the following output: > > 1) uname -a > 2) sysctl kstat.zfs.misc.arcstats > 3) smartctl -a /dev/ad12 FreeBSD takino.zet 8.0-STABLE FreeBSD 8.0-STABLE #0: Mon Mar 8 06:25:34 MSK 2010 root@takino.zet:/usr/obj/usr/src/sys/TAKINO amd64 (TAKINO is a pretty basic untuned kernel config: GENERIC plus some ipfw-related options, minus the '-g' debug flag.) The sysctl and smartctl outputs are attached. > Also, does rebooting the box restore write speed (yes, this is a serious > question/recommendation)? Yes, slightly: after a reboot I got 6 MB/s. 
Also, one more (may be) related thing: there was a power-crash some time ago. Wbr, --PNTmBPCT7hxwcZjr Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="smartctl.ad12.txt" smartctl 5.39.1 2010-01-28 r3054 [FreeBSD 8.0-STABLE amd64] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Western Digital Caviar Green family Device Model: WDC WD10EADS-00M2B0 Serial Number: WD-WMAV50024981 Firmware Version: 01.00A01 User Capacity: 1,000,204,886,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon Apr 5 00:53:13 2010 MSD SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x84) Offline data collection activity was suspended by an interrupting command from host. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: (19980) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 230) minutes. 
Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x303f) SCT Status supported. SCT Feature Control supported. SCT Data Table supported. SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 1 3 Spin_Up_Time 0x0027 111 107 021 Pre-fail Always - 7441 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 488 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 3 9 Power_On_Hours 0x0032 093 093 000 Old_age Always - 5568 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 290 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 23 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 464 194 Temperature_Celsius 0x0022 114 098 000 Old_age Always - 33 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 1 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t] SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. 
--PNTmBPCT7hxwcZjr Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="sysctl.kstat.txt" kstat.zfs.misc.arcstats.hits: 10478566 kstat.zfs.misc.arcstats.misses: 2896913 kstat.zfs.misc.arcstats.demand_data_hits: 7189095 kstat.zfs.misc.arcstats.demand_data_misses: 2588712 kstat.zfs.misc.arcstats.demand_metadata_hits: 3289471 kstat.zfs.misc.arcstats.demand_metadata_misses: 308201 kstat.zfs.misc.arcstats.prefetch_data_hits: 0 kstat.zfs.misc.arcstats.prefetch_data_misses: 0 kstat.zfs.misc.arcstats.prefetch_metadata_hits: 0 kstat.zfs.misc.arcstats.prefetch_metadata_misses: 0 kstat.zfs.misc.arcstats.mru_hits: 4075848 kstat.zfs.misc.arcstats.mru_ghost_hits: 184856 kstat.zfs.misc.arcstats.mfu_hits: 6402718 kstat.zfs.misc.arcstats.mfu_ghost_hits: 59708 kstat.zfs.misc.arcstats.deleted: 2809459 kstat.zfs.misc.arcstats.recycle_miss: 2950531 kstat.zfs.misc.arcstats.mutex_miss: 3149 kstat.zfs.misc.arcstats.evict_skip: 34090903 kstat.zfs.misc.arcstats.hash_elements: 12170 kstat.zfs.misc.arcstats.hash_elements_max: 20747 kstat.zfs.misc.arcstats.hash_collisions: 363681 kstat.zfs.misc.arcstats.hash_chains: 1727 kstat.zfs.misc.arcstats.hash_chain_max: 5 kstat.zfs.misc.arcstats.p: 344023154 kstat.zfs.misc.arcstats.c: 414288960 kstat.zfs.misc.arcstats.c_min: 53456640 kstat.zfs.misc.arcstats.c_max: 427653120 kstat.zfs.misc.arcstats.size: 409855944 kstat.zfs.misc.arcstats.hdr_size: 2531984 kstat.zfs.misc.arcstats.l2_hits: 0 kstat.zfs.misc.arcstats.l2_misses: 0 kstat.zfs.misc.arcstats.l2_feeds: 0 kstat.zfs.misc.arcstats.l2_rw_clash: 0 kstat.zfs.misc.arcstats.l2_writes_sent: 0 kstat.zfs.misc.arcstats.l2_writes_done: 0 kstat.zfs.misc.arcstats.l2_writes_error: 0 kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0 kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0 kstat.zfs.misc.arcstats.l2_evict_reading: 0 kstat.zfs.misc.arcstats.l2_free_on_write: 0 kstat.zfs.misc.arcstats.l2_abort_lowmem: 0 kstat.zfs.misc.arcstats.l2_cksum_bad: 0 kstat.zfs.misc.arcstats.l2_io_error: 0 
kstat.zfs.misc.arcstats.l2_size: 0 kstat.zfs.misc.arcstats.l2_hdr_size: 0 kstat.zfs.misc.arcstats.memory_throttle_count: 39958287 --PNTmBPCT7hxwcZjr Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="sysctl.kstat.after_reboot.txt" kstat.zfs.misc.arcstats.hits: 70211 kstat.zfs.misc.arcstats.misses: 4902 kstat.zfs.misc.arcstats.demand_data_hits: 50240 kstat.zfs.misc.arcstats.demand_data_misses: 2813 kstat.zfs.misc.arcstats.demand_metadata_hits: 19971 kstat.zfs.misc.arcstats.demand_metadata_misses: 2089 kstat.zfs.misc.arcstats.prefetch_data_hits: 0 kstat.zfs.misc.arcstats.prefetch_data_misses: 0 kstat.zfs.misc.arcstats.prefetch_metadata_hits: 0 kstat.zfs.misc.arcstats.prefetch_metadata_misses: 0 kstat.zfs.misc.arcstats.mru_hits: 28017 kstat.zfs.misc.arcstats.mru_ghost_hits: 8 kstat.zfs.misc.arcstats.mfu_hits: 42194 kstat.zfs.misc.arcstats.mfu_ghost_hits: 0 kstat.zfs.misc.arcstats.deleted: 195 kstat.zfs.misc.arcstats.recycle_miss: 0 kstat.zfs.misc.arcstats.mutex_miss: 0 kstat.zfs.misc.arcstats.evict_skip: 1571 kstat.zfs.misc.arcstats.hash_elements: 6036 kstat.zfs.misc.arcstats.hash_elements_max: 6043 kstat.zfs.misc.arcstats.hash_collisions: 781 kstat.zfs.misc.arcstats.hash_chains: 454 kstat.zfs.misc.arcstats.hash_chain_max: 3 kstat.zfs.misc.arcstats.p: 283834368 kstat.zfs.misc.arcstats.c: 427653120 kstat.zfs.misc.arcstats.c_min: 53456640 kstat.zfs.misc.arcstats.c_max: 427653120 kstat.zfs.misc.arcstats.size: 409581888 kstat.zfs.misc.arcstats.hdr_size: 1261312 kstat.zfs.misc.arcstats.l2_hits: 0 kstat.zfs.misc.arcstats.l2_misses: 0 kstat.zfs.misc.arcstats.l2_feeds: 0 kstat.zfs.misc.arcstats.l2_rw_clash: 0 kstat.zfs.misc.arcstats.l2_writes_sent: 0 kstat.zfs.misc.arcstats.l2_writes_done: 0 kstat.zfs.misc.arcstats.l2_writes_error: 0 kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0 kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0 kstat.zfs.misc.arcstats.l2_evict_reading: 0 kstat.zfs.misc.arcstats.l2_free_on_write: 0 
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0 kstat.zfs.misc.arcstats.l2_cksum_bad: 0 kstat.zfs.misc.arcstats.l2_io_error: 0 kstat.zfs.misc.arcstats.l2_size: 0 kstat.zfs.misc.arcstats.l2_hdr_size: 0 kstat.zfs.misc.arcstats.memory_throttle_count: 0 --PNTmBPCT7hxwcZjr-- From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 01:11:27 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 52D4D1065672; Mon, 5 Apr 2010 01:11:27 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 299408FC1C; Mon, 5 Apr 2010 01:11:27 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o351BRVB084763; Mon, 5 Apr 2010 01:11:27 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o351BR2C084759; Mon, 5 Apr 2010 01:11:27 GMT (envelope-from linimon) Date: Mon, 5 Apr 2010 01:11:27 GMT Message-Id: <201004050111.o351BR2C084759@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-amd64@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/145238: [zfs] [panic] kernel panic on zpool clear tank X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 01:11:27 -0000 Old Synopsis: kernel panic on zpool clear tank New Synopsis: [zfs] [panic] kernel panic on zpool clear tank Responsible-Changed-From-To: freebsd-amd64->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Apr 5 01:11:02 UTC 2010 Responsible-Changed-Why: Probably not amd64-specific. Assign to fs team. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=145238 From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 01:12:25 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DF7B2106564A; Mon, 5 Apr 2010 01:12:25 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id B5D788FC12; Mon, 5 Apr 2010 01:12:25 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o351CPXf084932; Mon, 5 Apr 2010 01:12:25 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o351CPK0084928; Mon, 5 Apr 2010 01:12:25 GMT (envelope-from linimon) Date: Mon, 5 Apr 2010 01:12:25 GMT Message-Id: <201004050112.o351CPK0084928@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/145229: [zfs] Vast differences in ZFS ARC behavior between 8.0-RC1 and 8.0-RELEASE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 01:12:26 -0000 Old Synopsis: Vast differences in ZFS ARC behavior between 8.0-RC1 and 8.0-RELEASE New Synopsis: [zfs] Vast differences in ZFS ARC behavior between 8.0-RC1 and 8.0-RELEASE Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Apr 5 01:12:07 UTC 2010 Responsible-Changed-Why: Reclassify and reassign. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=145229 From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 01:13:19 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 79F07106564A; Mon, 5 Apr 2010 01:13:19 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 50F218FC12; Mon, 5 Apr 2010 01:13:19 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o351DJMU085808; Mon, 5 Apr 2010 01:13:19 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o351DJoV085804; Mon, 5 Apr 2010 01:13:19 GMT (envelope-from linimon) Date: Mon, 5 Apr 2010 01:13:19 GMT Message-Id: <201004050113.o351DJoV085804@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/145189: [nfs] nfsd performs abysmally under load X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 01:13:19 -0000 Old Synopsis: nfsd performs abysmally under load New Synopsis: [nfs] nfsd performs abysmally under load Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Apr 5 01:13:00 UTC 2010 Responsible-Changed-Why: reclassify. 
http://www.freebsd.org/cgi/query-pr.cgi?pr=145189 From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 03:08:28 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E7AC6106566B for ; Mon, 5 Apr 2010 03:08:28 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from warped.bluecherry.net (unknown [IPv6:2001:440:eeee:fffb::2]) by mx1.freebsd.org (Postfix) with ESMTP id 90AA68FC0C for ; Mon, 5 Apr 2010 03:08:28 +0000 (UTC) Received: from volatile.chemikals.org (adsl-67-123-77.shv.bellsouth.net [98.67.123.77]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by warped.bluecherry.net (Postfix) with ESMTPSA id CE9AE809337F; Sun, 4 Apr 2010 22:08:26 -0500 (CDT) Received: from localhost (morganw@localhost [127.0.0.1]) by volatile.chemikals.org (8.14.4/8.14.4) with ESMTP id o3538LWv015098; Sun, 4 Apr 2010 22:08:22 -0500 (CDT) (envelope-from morganw@chemikals.org) Date: Sun, 4 Apr 2010 22:08:21 -0500 (CDT) From: Wes Morgan X-X-Sender: morganw@volatile To: Mikle In-Reply-To: <20100404191844.GA5071@takino.homeftp.org> Message-ID: References: <20100404191844.GA5071@takino.homeftp.org> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: clamav-milter 0.95.3 at warped X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org Subject: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 03:08:29 -0000 On Sun, 4 Apr 2010, Mikle wrote: > Hello, list! 
I've got some strange problem with one-disk zfs-pool: > read/write performance for the files on the fs (dd if=/dev/zero > of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while reading > from the disk (dd if=/dev/disk of=/dev/zero bs=4M count=100) gives me > ~70MB/s. pool is about 80% full; PC with the pool has 2GB of ram (1.5 of > which is free); i've done no tuning in loader.conf and sysctl.conf for > zfs. In dmesg there is no error-messages related to the disk (dmesg|grep > ^ad12); s.m.a.r.t. seems OK. Some time ago disk was OK, nothing in > software/hardware has changed from that day. Any ideas what could have > happen to the disk? Has it ever been close to 100% full? How long has it been 80% full and what kind of files are on it, size wise? From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 06:56:34 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C6C2E106566B for ; Mon, 5 Apr 2010 06:56:34 +0000 (UTC) (envelope-from nekoexmachina@gmail.com) Received: from fg-out-1718.google.com (fg-out-1718.google.com [72.14.220.155]) by mx1.freebsd.org (Postfix) with ESMTP id 521EF8FC12 for ; Mon, 5 Apr 2010 06:56:34 +0000 (UTC) Received: by fg-out-1718.google.com with SMTP id d23so1008903fga.13 for ; Sun, 04 Apr 2010 23:56:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:cc:subject :message-id:references:mime-version:content-type:content-disposition :in-reply-to:user-agent; bh=SJb8XVRnlmdqTx4aF5IiRpExskpCCgO6y+yIx5PDJxo=; b=cs5Jdu9CwMtJZj55y1+A00wOfwGeBCq/BtmgaYpHlQBRSvKP+wyLgYqUEdnxORvPxF HjrkGiHfkyttKnc4ni25e3SZ1JqOMgUX5x1FuuG3HNFRCxQuVZ9FUM/B8vYbfMQKRaVV YU2ox2UDWvlrHPpBSpoSyiljiuSP7jgEstAKs= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:cc:subject:message-id:references:mime-version 
:content-type:content-disposition:in-reply-to:user-agent; b=xLpMUqt5+JL683aHyca7EkYqx/qVGvgyfMDuftxVN3eT4ZfyUP3l33t/Naz1MxCS+x yyrl9xoAhcbSninzHuyssWHztHTQPriNk7Aga+L2jy8coZDR7Zn58Rg5/g+TQqN7w5i0 U6RlBEYFZ5WkAa1gKFfGP8I2nnNJws7b2nHnA= Received: by 10.87.65.38 with SMTP id s38mr8264334fgk.71.1270450593289; Sun, 04 Apr 2010 23:56:33 -0700 (PDT) Received: from localhost ([188.134.12.208]) by mx.google.com with ESMTPS id 28sm11962223fkx.36.2010.04.04.23.56.32 (version=TLSv1/SSLv3 cipher=RC4-MD5); Sun, 04 Apr 2010 23:56:32 -0700 (PDT) Date: Mon, 5 Apr 2010 10:55:00 +0400 From: Mikle Krutov To: Wes Morgan Message-ID: <20100405065500.GB48707@takino.homeftp.org> References: <20100404191844.GA5071@takino.homeftp.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 06:56:34 -0000 On Sun, Apr 04, 2010 at 10:08:21PM -0500, Wes Morgan wrote: > On Sun, 4 Apr 2010, Mikle wrote: > > > Hello, list! I've got some strange problem with one-disk zfs-pool: > > read/write performance for the files on the fs (dd if=/dev/zero > > of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while reading > > from the disk (dd if=/dev/disk of=/dev/zero bs=4M count=100) gives me > > ~70MB/s. pool is about 80% full; PC with the pool has 2GB of ram (1.5 of > > which is free); i've done no tuning in loader.conf and sysctl.conf for > > zfs. In dmesg there is no error-messages related to the disk (dmesg|grep > > ^ad12); s.m.a.r.t. seems OK. Some time ago disk was OK, nothing in > > software/hardware has changed from that day. Any ideas what could have > > happen to the disk? > > Has it ever been close to 100% full? 
How long has it been 80% full and > what kind of files are on it, size wise? No, it was never full. It has been at 80% for about a week, maybe. Most of the files are video, 200MB - 1.5GB per file. -- Wbr, Krutov Mikle From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 07:31:02 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4B5B71065670 for ; Mon, 5 Apr 2010 07:31:02 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta13.westchester.pa.mail.comcast.net (qmta13.westchester.pa.mail.comcast.net [76.96.59.243]) by mx1.freebsd.org (Postfix) with ESMTP id D9C6E8FC19 for ; Mon, 5 Apr 2010 07:31:01 +0000 (UTC) Received: from omta21.westchester.pa.mail.comcast.net ([76.96.62.72]) by qmta13.westchester.pa.mail.comcast.net with comcast id 1jX11e0021ZXKqc5DjX2Ar; Mon, 05 Apr 2010 07:31:02 +0000 Received: from koitsu.dyndns.org ([98.248.46.159]) by omta21.westchester.pa.mail.comcast.net with comcast id 1jai1e0013S48mS3hjai04; Mon, 05 Apr 2010 07:34:43 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 8D1179B419; Mon, 5 Apr 2010 00:30:59 -0700 (PDT) Date: Mon, 5 Apr 2010 00:30:59 -0700 From: Jeremy Chadwick To: freebsd-fs@freebsd.org Message-ID: <20100405073059.GA68655@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.5.20 (2009-06-14) Subject: Fwd: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 07:31:02 -0000 I'm not sure why this mail didn't make it to the mailing list (I do see it CC'd). The attachments are included inline. SMART stats for the disk look fine, so the disk is unlikely to be responsible for this issue.
OP, could you also please provide the output of "atacontrol cap ad12"? The arcstats entry that interested me the most was this (prior to the reboot): > kstat.zfs.misc.arcstats.memory_throttle_count: 39958287 The box probably needs tuning in /boot/loader.conf to relieve this problem. Below are values I've been using on our production systems for a month or two now. These are for machines with 8GB RAM installed. The OP may need to adjust the first two parameters (I tend to go with RAM/2 for vm.kmem_size and then subtract a bit more for arc_max (in this case 512MB less than kmem_size)). # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory. vm.kmem_size="4096M" vfs.zfs.arc_max="3584M" # Disable ZFS prefetching # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html # Increases overall speed of ZFS, but when disk flushing/writes occur, # system is less responsive (due to extreme disk I/O). # NOTE: 8.0-RC1 disables this by default on systems <= 4GB RAM anyway # NOTE: System has 8GB of RAM, so prefetch would be enabled by default. vfs.zfs.prefetch_disable="1" # Decrease ZFS txg timeout value from 30 (default) to 5 seconds. This # should increase throughput and decrease the "bursty" stalls that # happen during immense I/O with ZFS. # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html vfs.zfs.txg.timeout="5" -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. 
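Jeremy's rule of thumb above (vm.kmem_size at roughly half of installed RAM, vfs.zfs.arc_max 512MB below that) can be worked out mechanically. A minimal sketch for a 2GB machine like the OP's; the resulting numbers are illustrative starting points, not tested tunings:

```shell
# Sketch of the RAM/2 rule of thumb from the message above.
# ram_mb=2048 matches the OP's 2GB box; adjust for your machine.
ram_mb=2048
kmem_mb=$((ram_mb / 2))      # vm.kmem_size: half of RAM -> 1024
arc_mb=$((kmem_mb - 512))    # arc_max: 512MB below kmem_size -> 512
printf 'vm.kmem_size="%sM"\n' "$kmem_mb"
printf 'vfs.zfs.arc_max="%sM"\n' "$arc_mb"
```

For 2GB of RAM this yields vm.kmem_size="1024M" and vfs.zfs.arc_max="512M"; the tunables go in /boot/loader.conf and take effect on the next boot.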
PGP: 4BD6C0CB | ----- Forwarded message from Mikle ----- > From: Mikle > To: Jeremy Chadwick > Date: Mon, 5 Apr 2010 01:25:36 +0400 > Cc: freebsd-fs@freebsd.org > Subject: Re: Strange ZFS performance > > On Sun, Apr 04, 2010 at 01:41:27PM -0700, Jeremy Chadwick wrote: > > Please provide the following output: > > > > 1) uname -a > > 2) sysctl kstat.zfs.misc.arcstats > > 3) smartctl -a /dev/ad12 > FreeBSD takino.zet 8.0-STABLE FreeBSD 8.0-STABLE #0: Mon Mar 8 06:25:34 MSK 2010 root@takino.zet:/usr/obj/usr/src/sys/TAKINO amd64 > (TAKINO is pretty basic untuned config, generic config plus ipfw-related things minus '-g' debug flag) > sysctl & smart outputs are in the attaches. > > Also, does rebooting the box restore write speed (yes, this is a serious > > question/recommendation)? > Yes, it did slightly: after reboot i got 6MB/s. > > Also, one more (may be) related thing: there was a power-crash some time ago. > > Wbr, > smartctl 5.39.1 2010-01-28 r3054 [FreeBSD 8.0-STABLE amd64] (local build) > Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net > > === START OF INFORMATION SECTION === > Model Family: Western Digital Caviar Green family > Device Model: WDC WD10EADS-00M2B0 > Serial Number: WD-WMAV50024981 > Firmware Version: 01.00A01 > User Capacity: 1,000,204,886,016 bytes > Device is: In smartctl database [for details use: -P show] > ATA Version is: 8 > ATA Standard is: Exact ATA specification draft version not indicated > Local Time is: Mon Apr 5 00:53:13 2010 MSD > SMART support is: Available - device has SMART capability. > SMART support is: Enabled > > === START OF READ SMART DATA SECTION === > SMART overall-health self-assessment test result: PASSED > > General SMART Values: > Offline data collection status: (0x84) Offline data collection activity > was suspended by an interrupting command from host. > Auto Offline Data Collection: Enabled. 
> Self-test execution status: ( 0) The previous self-test routine completed > without error or no self-test has ever > been run. > Total time to complete Offline > data collection: (19980) seconds. > Offline data collection > capabilities: (0x7b) SMART execute Offline immediate. > Auto Offline data collection on/off support. > Suspend Offline collection upon new > command. > Offline surface scan supported. > Self-test supported. > Conveyance Self-test supported. > Selective Self-test supported. > SMART capabilities: (0x0003) Saves SMART data before entering > power-saving mode. > Supports SMART auto save timer. > Error logging capability: (0x01) Error logging supported. > General Purpose Logging supported. > Short self-test routine > recommended polling time: ( 2) minutes. > Extended self-test routine > recommended polling time: ( 230) minutes. > Conveyance self-test routine > recommended polling time: ( 5) minutes. > SCT capabilities: (0x303f) SCT Status supported. > SCT Feature Control supported. > SCT Data Table supported. 
> > SMART Attributes Data Structure revision number: 16 > Vendor Specific SMART Attributes with Thresholds: > ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE > 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 1 > 3 Spin_Up_Time 0x0027 111 107 021 Pre-fail Always - 7441 > 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 488 > 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 > 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 3 > 9 Power_On_Hours 0x0032 093 093 000 Old_age Always - 5568 > 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 > 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 > 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 290 > 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 23 > 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 464 > 194 Temperature_Celsius 0x0022 114 098 000 Old_age Always - 33 > 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 > 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 > 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 > 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 1 > 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 > > SMART Error Log Version: 1 > No Errors Logged > > SMART Self-test log structure revision number 1 > No self-tests have been logged. [To run self-tests, use: smartctl -t] > > > SMART Selective self-test log data structure revision number 1 > SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS > 1 0 0 Not_testing > 2 0 0 Not_testing > 3 0 0 Not_testing > 4 0 0 Not_testing > 5 0 0 Not_testing > Selective self-test flags (0x0): > After scanning selected spans, do NOT read-scan remainder of disk. > If Selective self-test is pending on power-up, resume after 0 minute delay. 
> > kstat.zfs.misc.arcstats.hits: 10478566 > kstat.zfs.misc.arcstats.misses: 2896913 > kstat.zfs.misc.arcstats.demand_data_hits: 7189095 > kstat.zfs.misc.arcstats.demand_data_misses: 2588712 > kstat.zfs.misc.arcstats.demand_metadata_hits: 3289471 > kstat.zfs.misc.arcstats.demand_metadata_misses: 308201 > kstat.zfs.misc.arcstats.prefetch_data_hits: 0 > kstat.zfs.misc.arcstats.prefetch_data_misses: 0 > kstat.zfs.misc.arcstats.prefetch_metadata_hits: 0 > kstat.zfs.misc.arcstats.prefetch_metadata_misses: 0 > kstat.zfs.misc.arcstats.mru_hits: 4075848 > kstat.zfs.misc.arcstats.mru_ghost_hits: 184856 > kstat.zfs.misc.arcstats.mfu_hits: 6402718 > kstat.zfs.misc.arcstats.mfu_ghost_hits: 59708 > kstat.zfs.misc.arcstats.deleted: 2809459 > kstat.zfs.misc.arcstats.recycle_miss: 2950531 > kstat.zfs.misc.arcstats.mutex_miss: 3149 > kstat.zfs.misc.arcstats.evict_skip: 34090903 > kstat.zfs.misc.arcstats.hash_elements: 12170 > kstat.zfs.misc.arcstats.hash_elements_max: 20747 > kstat.zfs.misc.arcstats.hash_collisions: 363681 > kstat.zfs.misc.arcstats.hash_chains: 1727 > kstat.zfs.misc.arcstats.hash_chain_max: 5 > kstat.zfs.misc.arcstats.p: 344023154 > kstat.zfs.misc.arcstats.c: 414288960 > kstat.zfs.misc.arcstats.c_min: 53456640 > kstat.zfs.misc.arcstats.c_max: 427653120 > kstat.zfs.misc.arcstats.size: 409855944 > kstat.zfs.misc.arcstats.hdr_size: 2531984 > kstat.zfs.misc.arcstats.l2_hits: 0 > kstat.zfs.misc.arcstats.l2_misses: 0 > kstat.zfs.misc.arcstats.l2_feeds: 0 > kstat.zfs.misc.arcstats.l2_rw_clash: 0 > kstat.zfs.misc.arcstats.l2_writes_sent: 0 > kstat.zfs.misc.arcstats.l2_writes_done: 0 > kstat.zfs.misc.arcstats.l2_writes_error: 0 > kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0 > kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0 > kstat.zfs.misc.arcstats.l2_evict_reading: 0 > kstat.zfs.misc.arcstats.l2_free_on_write: 0 > kstat.zfs.misc.arcstats.l2_abort_lowmem: 0 > kstat.zfs.misc.arcstats.l2_cksum_bad: 0 > kstat.zfs.misc.arcstats.l2_io_error: 0 > kstat.zfs.misc.arcstats.l2_size: 
0 > kstat.zfs.misc.arcstats.l2_hdr_size: 0 > kstat.zfs.misc.arcstats.memory_throttle_count: 39958287 > kstat.zfs.misc.arcstats.hits: 70211 > kstat.zfs.misc.arcstats.misses: 4902 > kstat.zfs.misc.arcstats.demand_data_hits: 50240 > kstat.zfs.misc.arcstats.demand_data_misses: 2813 > kstat.zfs.misc.arcstats.demand_metadata_hits: 19971 > kstat.zfs.misc.arcstats.demand_metadata_misses: 2089 > kstat.zfs.misc.arcstats.prefetch_data_hits: 0 > kstat.zfs.misc.arcstats.prefetch_data_misses: 0 > kstat.zfs.misc.arcstats.prefetch_metadata_hits: 0 > kstat.zfs.misc.arcstats.prefetch_metadata_misses: 0 > kstat.zfs.misc.arcstats.mru_hits: 28017 > kstat.zfs.misc.arcstats.mru_ghost_hits: 8 > kstat.zfs.misc.arcstats.mfu_hits: 42194 > kstat.zfs.misc.arcstats.mfu_ghost_hits: 0 > kstat.zfs.misc.arcstats.deleted: 195 > kstat.zfs.misc.arcstats.recycle_miss: 0 > kstat.zfs.misc.arcstats.mutex_miss: 0 > kstat.zfs.misc.arcstats.evict_skip: 1571 > kstat.zfs.misc.arcstats.hash_elements: 6036 > kstat.zfs.misc.arcstats.hash_elements_max: 6043 > kstat.zfs.misc.arcstats.hash_collisions: 781 > kstat.zfs.misc.arcstats.hash_chains: 454 > kstat.zfs.misc.arcstats.hash_chain_max: 3 > kstat.zfs.misc.arcstats.p: 283834368 > kstat.zfs.misc.arcstats.c: 427653120 > kstat.zfs.misc.arcstats.c_min: 53456640 > kstat.zfs.misc.arcstats.c_max: 427653120 > kstat.zfs.misc.arcstats.size: 409581888 > kstat.zfs.misc.arcstats.hdr_size: 1261312 > kstat.zfs.misc.arcstats.l2_hits: 0 > kstat.zfs.misc.arcstats.l2_misses: 0 > kstat.zfs.misc.arcstats.l2_feeds: 0 > kstat.zfs.misc.arcstats.l2_rw_clash: 0 > kstat.zfs.misc.arcstats.l2_writes_sent: 0 > kstat.zfs.misc.arcstats.l2_writes_done: 0 > kstat.zfs.misc.arcstats.l2_writes_error: 0 > kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0 > kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0 > kstat.zfs.misc.arcstats.l2_evict_reading: 0 > kstat.zfs.misc.arcstats.l2_free_on_write: 0 > kstat.zfs.misc.arcstats.l2_abort_lowmem: 0 > kstat.zfs.misc.arcstats.l2_cksum_bad: 0 > 
kstat.zfs.misc.arcstats.l2_io_error: 0 > kstat.zfs.misc.arcstats.l2_size: 0 > kstat.zfs.misc.arcstats.l2_hdr_size: 0 > kstat.zfs.misc.arcstats.memory_throttle_count: 0 ----- End forwarded message ----- From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 09:54:22 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 12BB9106564A for ; Mon, 5 Apr 2010 09:54:22 +0000 (UTC) (envelope-from nekoexmachina@gmail.com) Received: from mail-fx0-f209.google.com (mail-fx0-f209.google.com [209.85.220.209]) by mx1.freebsd.org (Postfix) with ESMTP id 970508FC15 for ; Mon, 5 Apr 2010 09:54:21 +0000 (UTC) Received: by fxm1 with SMTP id 1so2580929fxm.13 for ; Mon, 05 Apr 2010 02:54:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:cc:subject :message-id:references:mime-version:content-type:content-disposition :in-reply-to:user-agent; bh=R2zXdKIWECWG97sFah0T01CEp1j1PNwnwhO/PcggneQ=; b=lYtIoe7+bNvCjtcYieKUMkvdJGDtwsI/HbbBwdHKxJJtP3sqo1dxM7y0EYCxAmEWQ5 jcCMrY5LO5H4nyTq9/H2QJ/phoAgchR01WgIu4VEIK/1B05NyctRl30m3wgdfnUK33LL 4Lb/enbPfa4Yi6OGTjmzHax0vYCUQRz8rV2yI= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; b=sDJe8mPH0kvSQhAOPE7UuOFtNarl+rIha+3ZFT/A1cHVYSH6maJ2KUaPboQz3sPr2f nHHMJI4sj9GYdrV+CMETpFqFzLF4RVymSIwyYQmhbDB8TaJyyR/vLm1Fzf1sDL5dPrJm S5rLG6JlHq0Kjslpv+/uXVK+nWwtWbGM1SoHw= Received: by 10.223.5.69 with SMTP id 5mr5405604fau.8.1270461260436; Mon, 05 Apr 2010 02:54:20 -0700 (PDT) Received: from localhost ([188.134.12.208]) by mx.google.com with ESMTPS id 1sm9970472fkt.11.2010.04.05.02.54.19 (version=TLSv1/SSLv3 cipher=RC4-MD5); Mon, 05 Apr 2010 02:54:20 -0700 (PDT) Date: Mon, 5 Apr 2010 13:52:45 +0400 From: Mikle Krutov To: Jeremy Chadwick Message-ID: 
<20100405095245.GA1152@takino.homeftp.org> References: <20100405073059.GA68655@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20100405073059.GA68655@icarus.home.lan> User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: Fwd: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 09:54:22 -0000 On Mon, Apr 05, 2010 at 12:30:59AM -0700, Jeremy Chadwick wrote: > I'm not sure why this mail didn't make it to the mailing list (I do see > it CC'd). The attachments are included inline. > > SMART stats for the disk look fine, so the disk is unlikely to be > responsible for this issue. OP, could you also please provide the > output of "atacontrol cap ad12"? > > The arcstats entry that interested me the most was this (prior to the > reboot): > > > kstat.zfs.misc.arcstats.memory_throttle_count: 39958287 > > The box probably needs tuning in /boot/loader.conf to relieve this > problem. > > Below are values I've been using on our production systems for a month > or two now. These are for machines with 8GB RAM installed. The OP may > need to adjust the first two parameters (I tend to go with RAM/2 for > vm.kmem_size and then subtract a bit more for arc_max (in this case > 512MB less than kmem_size)). > > # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory. > vm.kmem_size="4096M" > vfs.zfs.arc_max="3584M" > > # Disable ZFS prefetching > # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html > # Increases overall speed of ZFS, but when disk flushing/writes occur, > # system is less responsive (due to extreme disk I/O). > # NOTE: 8.0-RC1 disables this by default on systems <= 4GB RAM anyway > # NOTE: System has 8GB of RAM, so prefetch would be enabled by default. 
> vfs.zfs.prefetch_disable="1"
>
> # Decrease ZFS txg timeout value from 30 (default) to 5 seconds. This
> # should increase throughput and decrease the "bursty" stalls that
> # happen during immense I/O with ZFS.
> # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
> # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
> vfs.zfs.txg.timeout="5"

I've tried that tuning; now I have:

vm.kmem_size="1024M"
vfs.zfs.arc_max="512M"
vfs.zfs.txg.timeout="5"

No change in performance. Also, reading directly from the hdd is now slow too (22-30MB/s), which suggests this could be some hardware problem (the SATA controller? but then the other disks would be in the same situation too; I also thought it could be the SATA cable and changed it, with no speed change after that). Additional dd output:

dd if=/dev/zero of=./file bs=4M count=10
41943040 bytes transferred in 0.039295 secs (1067389864 bytes/sec)
dd if=/dev/zero of=./file bs=4M count=20
83886080 bytes transferred in 0.076702 secs (1093663943 bytes/sec)
dd if=/dev/zero of=./file bs=4M count=30
125829120 bytes transferred in 0.114576 secs (1098216647 bytes/sec)
dd if=/dev/zero of=./file bs=4M count=40
167772160 bytes transferred in 0.174362 secs (962206293 bytes/sec)
dd if=/dev/zero of=./file bs=4M count=50
209715200 bytes transferred in 45.636052 secs (4595384 bytes/sec)

The runs were done back-to-back without any delay; the whole time, zpool iostat & gstat showed only 100KB/s-3MB/s. Is there any other information I could provide to help track down the source of the problem?
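The dd figures above are easier to compare in MB/s. The first four runs (160MB or less) fit comfortably in RAM and report cache speed near 1 GB/s; only the 200MB run outgrows the write buffering and reflects the disk itself. A quick conversion sketch using the numbers from the last run:

```shell
# Convert dd's "bytes transferred in secs" report to MB/s.
# 209715200 bytes / 45.636052 secs, divided by 1048576 bytes per MB:
# the sustained rate works out to roughly 4.4 MB/s.
awk 'BEGIN { printf "%.1f MB/s\n", 209715200 / 45.636052 / 1048576 }'
```

That ~4.4 MB/s sustained figure matches the 100KB/s-3MB/s range zpool iostat and gstat were showing, so the burst numbers are cache artifacts rather than real throughput.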
-- Wbr, Krutov Mikle From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 10:59:52 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 46591106564A for ; Mon, 5 Apr 2010 10:59:52 +0000 (UTC) (envelope-from morganw@chemikals.org) Received: from warped.bluecherry.net (unknown [IPv6:2001:440:eeee:fffb::2]) by mx1.freebsd.org (Postfix) with ESMTP id C9B748FC14 for ; Mon, 5 Apr 2010 10:59:51 +0000 (UTC) Received: from volatile.chemikals.org (adsl-67-123-77.shv.bellsouth.net [98.67.123.77]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by warped.bluecherry.net (Postfix) with ESMTPSA id 93084809337F; Mon, 5 Apr 2010 05:59:50 -0500 (CDT) Received: from localhost (morganw@localhost [127.0.0.1]) by volatile.chemikals.org (8.14.4/8.14.4) with ESMTP id o35Axlpi041439; Mon, 5 Apr 2010 05:59:47 -0500 (CDT) (envelope-from morganw@chemikals.org) Date: Mon, 5 Apr 2010 05:59:47 -0500 (CDT) From: Wes Morgan X-X-Sender: morganw@volatile To: Mikle Krutov In-Reply-To: <20100405065500.GB48707@takino.homeftp.org> Message-ID: References: <20100404191844.GA5071@takino.homeftp.org> <20100405065500.GB48707@takino.homeftp.org> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Virus-Scanned: clamav-milter 0.95.3 at warped X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org Subject: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 10:59:52 -0000 On Mon, 5 Apr 2010, Mikle Krutov wrote: > On Sun, Apr 04, 2010 at 10:08:21PM -0500, Wes Morgan wrote: > > On Sun, 4 Apr 2010, Mikle wrote: > > > > > Hello, list! 
I've got some strange problem with one-disk zfs-pool:
> > > read/write performance for the files on the fs (dd if=/dev/zero
> > > of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while reading
> > > from the disk (dd if=/dev/disk of=/dev/zero bs=4M count=100) gives me
> > > ~70MB/s. pool is about 80% full; PC with the pool has 2GB of ram (1.5 of
> > > which is free); i've done no tuning in loader.conf and sysctl.conf for
> > > zfs. In dmesg there is no error-messages related to the disk (dmesg|grep
> > > ^ad12); s.m.a.r.t. seems OK. Some time ago disk was OK, nothing in
> > > software/hardware has changed from that day. Any ideas what could have
> > > happen to the disk?
> >
> > Has it ever been close to 100% full? How long has it been 80% full and
> > what kind of files are on it, size wise?
> No, it was never full. It is at 80% for about a week maybe. Most of the files are the video of the 200MB - 1.5GB size per file.

I'm wondering if your pool is fragmented. What does gstat or iostat -x output for the device look like when you're accessing the raw device versus going through the filesystem?

A very interesting experiment (to me) would be to try these things:

1) using dd to replicate the disk to another disk, block for block
2) zfs send to a newly created, empty pool (could take a while!)

Then, without rebooting, compare the performance of the "new" pools. For #1 you would need to export the pool first and detach the original device before importing the duplicate.

There might be a script out there somewhere to parse the output from zdb and turn it into a block map to identify fragmentation, but I'm not aware of one. If you did find that was the case, currently the only fix is to rebuild the pool.
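The two experiments above might look roughly like the following. This is a hypothetical sketch only: the pool name "tank" and the devices ad12/ad14 are placeholders, it assumes a spare disk at least as large as the original, and it should only be run against hardware you can afford to scribble on.

```shell
# 1) Block-for-block clone of the disk, then import the duplicate.
#    Export first so ZFS is not using either copy during the dd.
zpool export tank
dd if=/dev/ad12 of=/dev/ad14 bs=1m
# ...physically detach ad12 here, so only the clone is visible...
zpool import tank

# 2) Replicate the datasets into a fresh, empty pool instead,
#    using a recursive snapshot and a replication send stream.
zpool create newtank ad14
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -d newtank
```

Because the second variant rewrites every block as it lands in the new pool, it also defragments the data, which is why comparing the clone against the resent copy would isolate fragmentation as the cause.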
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 11:07:00 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 18CA21065677 for ; Mon, 5 Apr 2010 11:07:00 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D7F698FC1F for ; Mon, 5 Apr 2010 11:06:59 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o35B6xnc027789 for ; Mon, 5 Apr 2010 11:06:59 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o35B6xwj027787 for freebsd-fs@FreeBSD.org; Mon, 5 Apr 2010 11:06:59 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 5 Apr 2010 11:06:59 GMT Message-Id: <201004051106.o35B6xwj027787@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 11:07:00 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/145339 fs [zfs] deadlock after detaching block device from raidz o kern/145309 fs [disklabel]: Editing disk label invalidates the whole o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c o kern/144458 fs [nfs] [patch] nfsd fails as a kld o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144330 fs [nfs] mbuf leakage in nfsd with zfs o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o bin/144214 fs zfsboot fails on gang block after upgrade to zfs v14 o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o kern/143345 fs [ext2fs] [patch] extfs minor header cleanups to better o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142924 fs [ext2fs] [patch] Small cleanup for the inode struct in o kern/142914 fs [zfs] ZFS performance degradation over time o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142401 fs [ntfs] [patch] Minor updates to NTFS from NetBSD o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140433 fs [zfs] [panic] panic while replaying ZIL after crash o kern/140134 fs [msdosfs] write and fsck destroy filesystem integrity o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs o bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139597 fs [patch] [tmpfs] tmpfs initializes va_gen but doesn't u o kern/139564 fs [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/139363 fs [nfs] diskless root nfs mount from non FreeBSD server o kern/138790 fs [zfs] ZFS ceases caching when mem demand is high o kern/138524 fs [msdosfs] disks and usb flashes/cards with Russian lab o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb f kern/137037 fs [zfs] [hang] zfs rollback on root causes FreeBSD to fr o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic o kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache 
filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133614 fs [panic] panic: ffs_truncate: read-only filesystem o kern/133174 fs [msdosfs] [patch] msdosfs must support utf-encoded int f kern/133150 fs [zfs] Page fault with ZFS on 7.1-RELEASE/amd64 while w o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130979 fs [smbfs] [panic] boot/kernel/smbfs.ko o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130229 fs [iconv] usermount fails on fs that need iconv o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/129059 fs [zfs] [patch] ZFS bootloader whitelistable via WITHOUT f kern/128829 fs smbd(8) causes periodic panic on 7-RELEASE o kern/127420 fs [gjournal] [panic] Journal overflow on gmirrored gjour o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: 
ffs_blkfree: freeing free
s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS
p kern/124621 fs [ext3] [patch] Cannot mount ext2fs partition
f bin/124424  fs [zfs] zfs(8): zfs list -r shows strange snapshots' siz
o kern/123939 fs [msdosfs] corrupts new files
o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172  fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898  fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121779  fs [ufs] snapinfo(8) (and related tools?) only work for t
o bin/121366  fs [zfs] [patch] Automatic disk scrubbing from periodic(8
o bin/121072  fs [smbfs] mount_smbfs(8) cannot normally convert the cha
f kern/120991 fs [panic] [fs] [snapshot] System crashes when manipulati
o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes
o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F
f kern/119735 fs [zfs] geli + ZFS + samba starting on boot panics 7.0-B
o kern/118912 fs [2tb] disk sizing/geometry problem with large array
o kern/118713 fs [minidump] [patch] Display media size required for a k
o bin/118249  fs mv(1): moving a directory changes its mtime
o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N
o bin/117315  fs [smbfs] mount_smbfs(8) and related options can't mount
o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani
o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980  fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with
o kern/116913 fs [ffs] [panic] ffs_blkfree: freeing free block
p kern/116608 fs [msdosfs] [patch] msdosfs fails to check mount options
o kern/116583 fs [ffs] [hang] System freezes for short time when using
o kern/116170 fs [panic] Kernel panic when mounting /tmp
o kern/115645 fs [snapshots] [panic] lockmgr: thread 0xc4c00d80, not ex
o bin/115361  fs [zfs] mount(8) gets into a state where it won't set/un
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468  fs [patch] [request] add -d option to umount(8) to detach
o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral
o bin/113838  fs [patch] [request] mount(8): add support for relative p
o bin/113049  fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843 fs [msdosfs] Long Names of files are incorrectly created
o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems
s bin/111146  fs [2tb] fsck(8) fails on 6T filesystem
o kern/109024 fs [msdosfs] mount_msdosfs: msdosfs_iconv: Operation not
o kern/109010 fs [msdosfs] can't mv directory within fat32 file system
o bin/107829  fs [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro
o kern/106030 fs [ufs] [panic] panic in ufs from geom when a dead disk
o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist
o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear
o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290  fs [ntfs] mount_ntfs ignorant of cluster sizes
o kern/97377  fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222  fs [iso9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849  fs [ufs] rename on UFS filesystem is not atomic
o kern/94769  fs [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733  fs [smbfs] smbfs may cause double unlock
o kern/93942  fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272  fs [ffs] [hang] Filling a filesystem while creating a sna
f kern/91568  fs [ufs] [panic] writing to UFS/softupdates DVD media in
o kern/91134  fs [smbfs] [patch] Preserve access and modification time
a kern/90815  fs [smbfs] [patch] SMBFS with character conversions somet
o kern/88657  fs [smbfs] windows client hang when browsing a samba shar
o kern/88266  fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi
o kern/87859  fs [smbfs] System reboot while umount smbfs.
o kern/86587  fs [msdosfs] rm -r /PATH fails with lots of small files
o kern/85326  fs [smbfs] [panic] saving a file via samba to an overquot
o kern/84589  fs [2TB] 5.4-STABLE unresponsive during background fsck 2
o kern/80088  fs [smbfs] Incorrect file time setting on NTFS mounted vi
o kern/73484  fs [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019   fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774  fs [ntfs] NTFS cannot "see" files on a WinXP filesystem
o kern/68978  fs [panic] [ufs] crashes with failing hard disk, loose po
o kern/65920  fs [nwfs] Mounted Netware filesystem behaves strange
o kern/65901  fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503  fs [smbfs] mount_smbfs does not work as non-root
o kern/55617  fs [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/53137  fs [ffs] [panic] background fscking causing ffs_valloc pa
o kern/51685  fs [hang] Unbounded inode allocation causes kernel to loc
o kern/51583  fs [nullfs] [patch] allow to work with devices and socket
o kern/36566  fs [smbfs] System reboot with dead smb mount and umount
o kern/33464  fs [ufs] soft update inconsistencies after system crash
o kern/18874  fs [2TB] 32bit NFS servers export wrong negative values t

170 problems total.
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 12:27:09 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1A71C106564A for ; Mon, 5 Apr 2010 12:27:09 +0000 (UTC) (envelope-from nekoexmachina@gmail.com) Received: from mail-fx0-f209.google.com (mail-fx0-f209.google.com [209.85.220.209]) by mx1.freebsd.org (Postfix) with ESMTP id 90DDA8FC08 for ; Mon, 5 Apr 2010 12:27:08 +0000 (UTC) Received: by fxm1 with SMTP id 1so2641156fxm.13 for ; Mon, 05 Apr 2010 05:27:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:cc:subject :message-id:references:mime-version:content-type:content-disposition :in-reply-to:user-agent; bh=i9XgeItGQZK8Lv3y625AvkxPFxLEtyEmi3c3tKyKSUo=; b=FmrhrPIhU4HmRhIm0A3L66LMs2w1OX0y/XSQRhBjgKGwnafigf3X2vqc0grMJli+4B 5w/A+rBfH8xsyl1s1+svcDBZYs/NjYB6cop0GSJR/G+LRwYPRlrxt1+QcRG5TvfiHvtM jQdBkQ/Dpt9fo05RHCTrwzpDT9C7JlTPTyuxU= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; b=Es7UNHGTeIk06teOlTKT8PtxGld+/oDQUhgGgMZD7X/4dBE51dKzQY1ENgpZa2TYuB Mfmm+NFHIRiuxp2toQaMpg3fiUzu+8gIyP60XmbYba0cqFSfkpoobX7qiEz6wcI1zM6q 9rYMCSmE27imRLu1C8ES3WHsyMXdqjedriVLY= Received: by 10.223.15.65 with SMTP id j1mr5588813faa.0.1270470427376; Mon, 05 Apr 2010 05:27:07 -0700 (PDT) Received: from localhost ([188.134.12.208]) by mx.google.com with ESMTPS id 22sm26246267fkr.29.2010.04.05.05.27.06 (version=TLSv1/SSLv3 cipher=RC4-MD5); Mon, 05 Apr 2010 05:27:06 -0700 (PDT) Date: Mon, 5 Apr 2010 16:25:32 +0400 From: Mikle Krutov To: Wes Morgan Message-ID: <20100405122532.GA1704@takino.homeftp.org> References: <20100404191844.GA5071@takino.homeftp.org> <20100405065500.GB48707@takino.homeftp.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii 
Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 12:27:09 -0000 On Mon, Apr 05, 2010 at 05:59:47AM -0500, Wes Morgan wrote: > On Mon, 5 Apr 2010, Mikle Krutov wrote: > > > On Sun, Apr 04, 2010 at 10:08:21PM -0500, Wes Morgan wrote: > > > On Sun, 4 Apr 2010, Mikle wrote: > > > > > > > Hello, list! I've got some strange problem with one-disk zfs-pool: > > > > read/write performance for the files on the fs (dd if=/dev/zero > > > > of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while reading > > > > from the disk (dd if=/dev/disk of=/dev/zero bs=4M count=100) gives me > > > > ~70MB/s. pool is about 80% full; PC with the pool has 2GB of ram (1.5 of > > > > which is free); i've done no tuning in loader.conf and sysctl.conf for > > > > zfs. In dmesg there is no error-messages related to the disk (dmesg|grep > > > > ^ad12); s.m.a.r.t. seems OK. Some time ago disk was OK, nothing in > > > > software/hardware has changed from that day. Any ideas what could have > > > > happen to the disk? > > > > > > Has it ever been close to 100% full? How long has it been 80% full and > > > what kind of files are on it, size wise? > > No, it was never full. It is at 80% for about a week maybe. Most of the files are the video of the 200MB - 1.5GB size per file. > > I'm wondering if your pool is fragmented. What does gstat or iostat -x > output for the device look like when you're doing accessing the raw device > versus filesystem access? A very interesting experiment (to me) would be > to try these things: > > 1) using dd to replicate the disc to another disc, block for block > 2) zfs send to a newly created, empty pool (could take a while!) 
>
> Then, without rebooting, compare the performance of the "new" pools. For
> #1 you would need to export the pool first and detach the original device
> before importing the duplicate.
>
> There might be a script out there somewhere to parse the output from zdb
> and turn it into a block map to identify fragmentation, but I'm not aware
> of one. If you did find that was the case, currently the only fix is to
> rebuild the pool.

device     r/s   w/s    kr/s   kw/s  wait  svc_t  %b
ad12      18.0   0.0  2302.6    0.0     4  370.0  199

For cp'ing from one pool to another, the gstat line is:

 L(q)  ops/s  r/s   kBps  ms/r  w/s  kBps  ms/w  %busy  Name
    3     22   22   2814  69.0    0     0   0.0   71.7| gpt/pool2

For dd (now performance is crappy, too):

 L(q)  ops/s  r/s   kBps  ms/r  w/s  kBps  ms/w  %busy  Name
    1     99   99  12658  14.2    0     0   0.0  140.4| gpt/pool2

Unfortunately, I have no free hdd of the same size, so that experiment will have to wait. Also, the ZFS FAQ from Sun tells me:

>Q: Are ZFS file systems shrinkable? How about fragmentation? Any need to defrag them?
>A: <...> The allocation algorithms are such that defragmentation is not an issue.

Is that just marketing crap?

p.s.
There was some mailing-list issue and we got a second thread. Also, I forgot to post atacontrol cap ad12 to that thread, here it is:

Protocol              SATA revision 2.x
device model          WDC WD10EADS-00M2B0
serial number         WD-WMAV50024981
firmware revision     01.00A01
cylinders             16383
heads                 16
sectors/track         63
lba supported         268435455 sectors
lba48 supported       1953525168 sectors
dma supported
overlap not supported

Feature                        Support  Enable  Value     Vendor
write cache                    yes      yes
read ahead                     yes      yes
Native Command Queuing (NCQ)   yes      -       31/0x1F
Tagged Command Queuing (TCQ)   no       no      31/0x1F
SMART                          yes      yes
microcode download             yes      yes
security                       yes      no
power management               yes      yes
advanced power management      no       no      0/0x00
automatic acoustic management  yes      no      254/0xFE  128/0x80

http://permalink.gmane.org/gmane.os.freebsd.devel.file-systems/8876

>On Mon, Apr 05, 2010 at 12:30:59AM -0700, Jeremy Chadwick wrote:
>> I'm not sure why this mail didn't make it to the mailing list (I do see
>> it CC'd). The attachments are included inline.
>>
>> SMART stats for the disk look fine, so the disk is unlikely to be
>> responsible for this issue. OP, could you also please provide the
>> output of "atacontrol cap ad12"?
>>
>> The arcstats entry that interested me the most was this (prior to the
>> reboot):
>>
>> > kstat.zfs.misc.arcstats.memory_throttle_count: 39958287
>>
>> The box probably needs tuning in /boot/loader.conf to relieve this
>> problem.
>>
>> Below are values I've been using on our production systems for a month
>> or two now. These are for machines with 8GB RAM installed. The OP may
>> need to adjust the first two parameters (I tend to go with RAM/2 for
>> vm.kmem_size and then subtract a bit more for arc_max (in this case
>> 512MB less than kmem_size)).
>>
>> # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory.
>> vm.kmem_size="4096M"
>> vfs.zfs.arc_max="3584M"
>>
>> # Disable ZFS prefetching
>> # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
>> # Increases overall speed of ZFS, but when disk flushing/writes occur,
>> # system is less responsive (due to extreme disk I/O).
>> # NOTE: 8.0-RC1 disables this by default on systems <= 4GB RAM anyway
>> # NOTE: System has 8GB of RAM, so prefetch would be enabled by default.
>> vfs.zfs.prefetch_disable="1"
>>
>> # Decrease ZFS txg timeout value from 30 (default) to 5 seconds. This
>> # should increase throughput and decrease the "bursty" stalls that
>> # happen during immense I/O with ZFS.
>> # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
>> # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
>> vfs.zfs.txg.timeout="5"
>I've tried that tuning, now I have:
>vm.kmem_size="1024M"
>vfs.zfs.arc_max="512M"
>vfs.zfs.txg.timeout="5"
>No change in performance. Also, reading directly from the hdd is now slow too (22-30MB/s),
>which suggests this could be some hardware problem (sata controller? but then the other disks
>would be in the same situation too. I also thought it could be the sata cable - and changed
>it - no speed gain after that).
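The RAM/2 rule of thumb quoted above (vm.kmem_size = RAM/2, vfs.zfs.arc_max = vm.kmem_size minus 512MB) can be sketched as a small /bin/sh snippet. The script and its variable names are illustrative only, not from any of the mails; the resulting values still need sanity-checking against the machine in question.

```shell
#!/bin/sh
# Sketch of the tuning rule quoted above: vm.kmem_size = RAM/2 and
# vfs.zfs.arc_max = vm.kmem_size - 512MB. Prints candidate
# /boot/loader.conf lines; review them before installing.

ram_mb=8192   # installed RAM in MB (on FreeBSD: sysctl -n hw.physmem, divided by 1048576)

kmem_mb=$((ram_mb / 2))
arc_mb=$((kmem_mb - 512))

printf 'vm.kmem_size="%dM"\n' "$kmem_mb"
printf 'vfs.zfs.arc_max="%dM"\n' "$arc_mb"
printf 'vfs.zfs.txg.timeout="5"\n'
```

For ram_mb=8192 this prints the same vm.kmem_size="4096M" / vfs.zfs.arc_max="3584M" pair used in the mail above.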
>Additional information for dd:
>dd if=/dev/zero of=./file bs=4M count=10
>41943040 bytes transferred in 0.039295 secs (1067389864 bytes/sec)
>
>dd if=/dev/zero of=./file bs=4M count=20
>83886080 bytes transferred in 0.076702 secs (1093663943 bytes/sec)

-- 
Wbr, Krutov Mikle

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 17:24:17 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CF4D6106566B for ; Mon, 5 Apr 2010 17:24:17 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 97CAB8FC17 for ; Mon, 5 Apr 2010 17:24:17 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.13.8+Sun/8.13.8) with ESMTP id o35HOFJW019645; Mon, 5 Apr 2010 12:24:15 -0500 (CDT) Date: Mon, 5 Apr 2010 12:24:15 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Mikle Krutov In-Reply-To: <20100405095245.GA1152@takino.homeftp.org> Message-ID: References: <20100405073059.GA68655@icarus.home.lan> <20100405095245.GA1152@takino.homeftp.org> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Mon, 05 Apr 2010 12:24:16 -0500 (CDT) Cc: freebsd-fs@freebsd.org Subject: Re: Fwd: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 17:24:17 -0000 On Mon, 5 Apr 2010, Mikle Krutov wrote: > Any information i could provide to help us know what's the source of the problem?
The svc_t value I saw posted in one of your mails was outrageously large. I suggest running iostat -x 30 while doing long-duration write and see what the actual values are. It seems likely that you are overrunning what your controller or disk is capable of handling and this is creating an I/O backlog which is far greater than what it would be if zfs had queued fewer simultaneous requests. It may be that tuning zfs_vdev_max_pending to a small value may help keep your controller from being overloaded. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 19:18:21 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5C205106566B for ; Mon, 5 Apr 2010 19:18:21 +0000 (UTC) (envelope-from nekoexmachina@gmail.com) Received: from mail-fx0-f209.google.com (mail-fx0-f209.google.com [209.85.220.209]) by mx1.freebsd.org (Postfix) with ESMTP id DCF798FC12 for ; Mon, 5 Apr 2010 19:18:20 +0000 (UTC) Received: by fxm1 with SMTP id 1so2920941fxm.13 for ; Mon, 05 Apr 2010 12:18:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:cc:subject :message-id:references:mime-version:content-type:content-disposition :in-reply-to:user-agent; bh=UXgU3/+E2E8Y1+uFmXc7MLMfArrRs5KRCtiV7hjBzxg=; b=E7n+XRq55HzI57et1dPSCrgFZByWNnBeAyP47m2eSnJdT6QV1k0Zpl5EQ00pIjyeZo F4b8XphIhE3LKABmRX370z/GNZKHl/OoR5fl+BukSoj04pgKW2HYpEMHk7a1F2HhaB0c UMwalQ0AZp2Ci3qVpSQHTfeB/QVZItqk/rQTs= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; b=uKBn5uouultI2HbW/p2tMTfxblLvDsAu3fx0JPM3pwgKRQ0GtZySrc9M4QimST/Gpd 
EAPg7HoHOq2y+Sn89orPK1US7fZKiFGeXRMTCw4VEpChfaOYth8tWbVNcMGNTEib0Q76 IIfnCLXpDPdxn8qeThia6ndgtuuOR3p6N5LY4= Received: by 10.223.1.146 with SMTP id 18mr6173509faf.53.1270495099702; Mon, 05 Apr 2010 12:18:19 -0700 (PDT) Received: from localhost ([188.134.12.208]) by mx.google.com with ESMTPS id 21sm27080794fks.53.2010.04.05.12.18.18 (version=TLSv1/SSLv3 cipher=RC4-MD5); Mon, 05 Apr 2010 12:18:19 -0700 (PDT) Date: Mon, 5 Apr 2010 23:16:43 +0400 From: Mikle Krutov To: Bob Friesenhahn Message-ID: <20100405191643.GA5996@takino.homeftp.org> References: <20100405073059.GA68655@icarus.home.lan> <20100405095245.GA1152@takino.homeftp.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: Fwd: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 19:18:21 -0000 On Mon, Apr 05, 2010 at 12:24:15PM -0500, Bob Friesenhahn wrote: > On Mon, 5 Apr 2010, Mikle Krutov wrote: > > Any information i could provide to help us know what's the source of the problem? > > The svc_t value I saw posted in one of your mails was outrageously > large. I suggest running > > iostat -x 30 > > while doing long-duration write and see what the actual values are. > It seems likely that you are overrunning what your controller or disk > is capable of handling and this is creating an I/O backlog which is > far greater than what it would be if zfs had queued fewer > simultaneous requests. > > It may be that tuning zfs_vdev_max_pending to a small value may help > keep your controller from being overloaded. 
> > Bob
> -- 
> Bob Friesenhahn
> bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

When copying from one disk to another (e.g. reading from ad12), svc_t is pretty large (2.0-3.5 on the target disk vs. 11 at the beginning, rising to about 42 by the end, on ad12). For writing it is ~20 at the beginning and quickly grows to 200.

Also, if this were an overrun of hdd/controller capability, then to my mind I/O should be very slow only for some amount of time, while zfs flushes the data from memory to the hdd, and then it should go back up to the normal 60MB/s. That does not happen: e.g. if I try to write something just after reboot, with no data waiting to be flushed from memory, the speed is already 2-6MB/s.

Still, I've tried setting vfs.zfs.vdev.max_pending to 8 and 4; it did not help.

-- 
Wbr, Krutov Mikle

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 5 21:13:25 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B4A511065675; Mon, 5 Apr 2010 21:13:25 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 8BAA88FC12; Mon, 5 Apr 2010 21:13:25 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o35LDPvh059000; Mon, 5 Apr 2010 21:13:25 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o35LDPKX058996; Mon, 5 Apr 2010 21:13:25 GMT (envelope-from linimon) Date: Mon, 5 Apr 2010 21:13:25 GMT Message-Id: <201004052113.o35LDPKX058996@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/145411: [xfs] [panic] Kernel panics shortly after
mounting an XFS partition X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 05 Apr 2010 21:13:25 -0000 Synopsis: [xfs] [panic] Kernel panics shortly after mounting an XFS partition Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Mon Apr 5 21:13:18 UTC 2010 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=145411 From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 05:23:02 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 606D0106566C; Tue, 6 Apr 2010 05:23:02 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 36AD68FC19; Tue, 6 Apr 2010 05:23:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o365N2Jk084922; Tue, 6 Apr 2010 05:23:02 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o365N2sr084918; Tue, 6 Apr 2010 05:23:02 GMT (envelope-from linimon) Date: Tue, 6 Apr 2010 05:23:02 GMT Message-Id: <201004060523.o365N2sr084918@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/145234: [zfs] zvol with org.freebsd:swap=on crashes zfs list X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 05:23:02 -0000 Old Synopsis: zvol with org.freebsd:swap=on crashes zfs list New 
Synopsis: [zfs] zvol with org.freebsd:swap=on crashes zfs list Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Tue Apr 6 05:21:32 UTC 2010 Responsible-Changed-Why: Even though the bug shows up in a binary, I'm going out on a limb and assigning it to the fs@ mailing list, where most of the other zfs bugs have been assigned. http://www.freebsd.org/cgi/query-pr.cgi?pr=145234 From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 07:00:16 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C7140106564A for ; Tue, 6 Apr 2010 07:00:16 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 9B96D8FC16 for ; Tue, 6 Apr 2010 07:00:16 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o3670Gdl065182 for ; Tue, 6 Apr 2010 07:00:16 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o3670Gdp065181; Tue, 6 Apr 2010 07:00:16 GMT (envelope-from gnats) Date: Tue, 6 Apr 2010 07:00:16 GMT Message-Id: <201004060700.o3670Gdp065181@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Andriy Gapon Cc: Subject: Re: kern/145234: [zfs] zvol with org.freebsd:swap=on crashes zfs list X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Andriy Gapon List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 07:00:16 -0000 The following reply was made to PR kern/145234; it has been noted by GNATS. 
From: Andriy Gapon To: bug-followup@FreeBSD.org, KOT@MATPOCKuH.Ru Cc: Subject: Re: kern/145234: [zfs] zvol with org.freebsd:swap=on crashes zfs list Date: Tue, 06 Apr 2010 09:54:06 +0300 This should be resolved in head now. See r206199 (http://svn.freebsd.org/changeset/base/206199) And kern/145377. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 12:45:24 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 96AAC1065670 for ; Tue, 6 Apr 2010 12:45:24 +0000 (UTC) (envelope-from octavianh@gmail.com) Received: from mail-pv0-f182.google.com (mail-pv0-f182.google.com [74.125.83.182]) by mx1.freebsd.org (Postfix) with ESMTP id 3B7E18FC17 for ; Tue, 6 Apr 2010 12:45:23 +0000 (UTC) Received: by pvc7 with SMTP id 7so2485693pvc.13 for ; Tue, 06 Apr 2010 05:45:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:date:received:message-id :subject:from:to:content-type; bh=EYRLUjJtjbAdp+uE34YGnMgRKy1AnFzmy1VxqraPioE=; b=nISfFUzd/rzceMPvaavaSf+uXU3fJUFgVgZgXoVrbwqG+QgivLE/GR5+anJHN0lAGo BIqmW11YJ3D5SPdR/VNwiuTPXSggic+/+tTID1wp678hqQbzHDE2D8XMRVTT9rZI0SYI xcwhcf5lt0ZJzIbouVMEN/1m0RfknOuvtBELM= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=eQAh+wPjJsUmfQz8NPLn04W0WyjRFCvEQyy90Q/qM+M7lkE+wwkqazA8RpP17G7HsW InttiRUEQvKSaAYbQ9jscwee3gVrO8SXFTa7VjCgVRc7M9szcNmUtcpVuJ5zllFCiTI6 95l1pXt+vdlNJ+H01PgPm4zvBW9Pb/p04SiZY= MIME-Version: 1.0 Received: by 10.142.163.14 with HTTP; Tue, 6 Apr 2010 05:23:24 -0700 (PDT) Date: Tue, 6 Apr 2010 05:23:24 -0700 Received: by 10.143.194.1 with SMTP id w1mr2540743wfp.215.1270556604412; Tue, 06 Apr 2010 05:23:24 -0700 (PDT) Message-ID: From: Octavian Hornoiu To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 
2.1.5 Subject: NFSv4_ACLs in 8-STABLE with ZFS and Samba X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 12:45:24 -0000

I downloaded FreeBSD 8-STABLE as of 4/01/2010, re-compiled my kernel and made sure I was using the latest version of ZFS on my zpool. I then installed libsunacl. I chose to compile samba34 with "acl" support but that didn't seem to work, so I re-read the wiki entry here: http://wiki.freebsd.org/NFSv4_ACLs

I realized it said that apps had to be compiled by changing "sys/acl.h" into "sunacl.h". I extracted samba's tar file, replaced all references to sys/acl.h with sunacl.h, re-tarred it, replaced the md5/sha256 checksum and compiled the port successfully.

When I go to use ACLs it won't allow me to change them. I go to remove an entry, add an entry or change inheritance, I click OK in samba and nothing happens; the next window simply shows them all back again with no modifications. Am I missing something obvious?

My smb.conf had the following enabled in the global section:

nt acl support = yes
inherit acls = yes
map acl inherit = yes

Also, I checked zfs and it has the following settings for ACLs:

data  aclmode     groupmask   default
data  aclinherit  restricted  default

Any help would be greatly appreciated! Thanks!
Octavian From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 13:56:49 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BF6091065674 for ; Tue, 6 Apr 2010 13:56:49 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 121DD8FC1C for ; Tue, 6 Apr 2010 13:56:48 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id QAA00108; Tue, 06 Apr 2010 16:56:47 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4BBB3D9E.3060905@freebsd.org> Date: Tue, 06 Apr 2010 16:56:46 +0300 From: Andriy Gapon User-Agent: Thunderbird 2.0.0.24 (X11/20100319) MIME-Version: 1.0 To: freebsd-fs@freebsd.org X-Enigmail-Version: 0.95.7 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: Subject: call for review: avgfs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 13:56:49 -0000 I am asking for all the interested people to review my new cool filesystem avgfs. It is essentially a replacement for ZFS and UFS. OK, OK, this is only a half-joke :-) What I am trying to achieve is to create a simple filesystem that could serve both as a very simple sample/skeleton filesystem and as a demonstration tool. This is what I did: I took some fs code that I am relatively familiar with (UDF filesystem), changed all copyrights to mine :-) and cut off all the unnecessary code. Then I added a little bit of new code. Here's a result: http://people.freebsd.org/~avg/avgfs/ The code is provided as .tgz for download, as .diff for patching and as a bunch of files for online browsing. 
What the filesystem does: it is readonly; it accepts any disk as a valid image, no metadata necessary; it presents a single (root) directory with a single file 'thefile' in it. The file is basically a proxy to the underlying disk: reading n bytes from offset x in the file is, more or less, reading the same bytes from the disk.

Internally, reads are done using either bread() on thefile's vnode or bread() on devvp. Currently this is controlled at compile time using "#if 1|0"; I plan to turn that into a mount option. bread() is called with up to MAXBSIZE size, depending on the offset and requested size. No read-ahead, no breadn(). These policies probably should also be controlled via mount options.

I am sure that the code contains a bunch of bugs. There is probably some code that isn't actually needed, and perhaps some needed code is missing. Likely, some pieces do things incorrectly or do not properly handle some input values.

I will be very grateful for any help with improving this code in all respects: its correctness, its quality/readability, its ease of understanding and, most importantly, its proper implementation and use of VFS interfaces.

Also, I would like a better name for the filesystem, one that would reflect its purpose and behavior. But my imagination is not doing well. Thank you!

Scott, I felt obliged to CC you as a copyright holder on the original UDF files that I used as the basis.
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 14:59:16 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7AA931065675; Tue, 6 Apr 2010 14:59:16 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 505058FC13; Tue, 6 Apr 2010 14:59:16 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o36ExGX3020078; Tue, 6 Apr 2010 14:59:16 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o36ExG4d020074; Tue, 6 Apr 2010 14:59:16 GMT (envelope-from linimon) Date: Tue, 6 Apr 2010 14:59:16 GMT Message-Id: <201004061459.o36ExG4d020074@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/145423: [zfs] ZFS/zpool status shows deleted/not present pools after scrub X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 14:59:16 -0000 Old Synopsis: ZFS/zpool status shows deleted/not present pools after scrub New Synopsis: [zfs] ZFS/zpool status shows deleted/not present pools after scrub Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Tue Apr 6 14:59:02 UTC 2010 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=145423 From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 14:59:49 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E7286106566C; Tue, 6 Apr 2010 14:59:49 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id BD3C28FC19; Tue, 6 Apr 2010 14:59:49 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o36Exnxx020144; Tue, 6 Apr 2010 14:59:49 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o36ExnEn020140; Tue, 6 Apr 2010 14:59:49 GMT (envelope-from linimon) Date: Tue, 6 Apr 2010 14:59:49 GMT Message-Id: <201004061459.o36ExnEn020140@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/145424: [zfs] [patch] move source closer to v15 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 14:59:50 -0000 Synopsis: [zfs] [patch] move source closer to v15 Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Tue Apr 6 14:59:42 UTC 2010 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=145424 From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 17:34:15 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7280B1065670; Tue, 6 Apr 2010 17:34:15 +0000 (UTC) (envelope-from toasty@dragondata.com) Received: from mail-pz0-f197.google.com (mail-pz0-f197.google.com [209.85.222.197]) by mx1.freebsd.org (Postfix) with ESMTP id D0C798FC16; Tue, 6 Apr 2010 17:34:14 +0000 (UTC) Received: by pzk35 with SMTP id 35so133494pzk.3 for ; Tue, 06 Apr 2010 10:34:14 -0700 (PDT) Received: by 10.115.102.16 with SMTP id e16mr6156927wam.117.1270575253959; Tue, 06 Apr 2010 10:34:13 -0700 (PDT) Received: from vpn177.ord02.your.org (vpn177.ord02.your.org [204.9.55.177]) by mx.google.com with ESMTPS id 20sm5623360iwn.1.2010.04.06.10.34.12 (version=TLSv1/SSLv3 cipher=RC4-MD5); Tue, 06 Apr 2010 10:34:13 -0700 (PDT) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=us-ascii From: Kevin Day In-Reply-To: <20100310205711.GA1847@garage.freebsd.pl> Date: Tue, 6 Apr 2010 12:34:10 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: <31E0B354-ADD6-412B-9599-5E33A5E27853@dragondata.com> References: <7418ECC2-55C1-4A28-82EA-0972AFE745EF@dragondata.com> <20100310205711.GA1847@garage.freebsd.pl> To: Pawel Jakub Dawidek X-Mailer: Apple Mail (2.1077) Cc: freebsd-fs@freebsd.org Subject: Re: iscsi over HAST backed storage partial success X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 17:34:15 -0000 On Mar 10, 2010, at 2:57 PM, Pawel Jakub Dawidek wrote: > On Tue, Mar 09, 2010 at 05:03:41PM -0600, Kevin Day wrote: >> >> I'm running istgt (iscsi target) using HAST backed storage. For the most part, it seems to work really well.
I have ucarp running to change the IP that istgt is bound to, and modified the ucarp scripts to start/stop istgt depending on which side is the master. If I shut down the primary, the secondary takes over and all seems well. >> >> However, if I reboot the secondary, the primary starts freezing up for long periods: >> >> Mar 9 22:46:27 cs04 hastd: [iscsi1] (primary) Unable to r: Socket is not connected. >> Mar 9 22:46:27 cs04 hastd: [iscsi1] (primary) Unable to co: Connection refused. >> Mar 9 22:46:42 cs04 last message repeated 3 times >> Mar 9 22:46:53 cs04 istgt[14298]: ABORT_TASK >> Mar 9 22:47:35 cs04 last message repeated 3 times >> Mar 9 22:48:02 cs04 hastd: [iscsi1] (primary) Unable to co: Operation timed out. >> Mar 9 22:48:02 cs04 istgt[14298]: CmdSN(45748), OP=0x2a, ElapsedTime=74 cleared >> Mar 9 22:48:02 cs04 istgt[14298]: istgt_iscsi.c: 640:istgt_iscsi_write_pdu: ***ERROR*** iscsi_write() failed (errno=32) >> Mar 9 22:48:02 cs04 istgt[14298]: istgt_iscsi.c:3327:istgt_iscsi_op_task: ***ERROR*** iscsi_write_pdu() failed >> Mar 9 22:48:02 cs04 istgt[14298]: istgt_iscsi.c:3867:istgt_iscsi_execute: ***ERROR*** iscsi_op_task() failed >> Mar 9 22:48:02 cs04 istgt[14298]: istgt_iscsi.c:4337:worker: ***ERROR*** iscsi_execute() failed >> Mar 9 22:48:02 cs04 istgt[14298]: CmdSN(490802), OP=0x2a, ElapsedTime=73 cleared >> Mar 9 22:48:02 cs04 istgt[14298]: CmdSN(28387), OP=0x2a, ElapsedTime=73 cleared >> Mar 9 22:48:14 cs04 istgt[14298]: ABORT_TASK >> Mar 9 22:48:52 cs04 last message repeated 2 times >> Mar 9 22:49:22 cs04 hastd: [iscsi1] (primary) Unable to co: Operation timed out. >> >> As soon as the secondary comes back online, everything starts behaving again and all is well. > > Could you try the following patch? > > http://people.freebsd.org/~pjd/patches/hastd_primary.c.patch > Sorry for the long delay. This does seem to fix that problem, yes.
:) -- Kevin From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 17:36:41 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 116D4106564A for ; Tue, 6 Apr 2010 17:36:41 +0000 (UTC) (envelope-from nekoexmachina@gmail.com) Received: from mail-fx0-f222.google.com (mail-fx0-f222.google.com [209.85.220.222]) by mx1.freebsd.org (Postfix) with ESMTP id 96F0B8FC1E for ; Tue, 6 Apr 2010 17:36:40 +0000 (UTC) Received: by fxm22 with SMTP id 22so137869fxm.14 for ; Tue, 06 Apr 2010 10:36:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:subject :message-id:references:mime-version:content-type:content-disposition :in-reply-to:user-agent; bh=93vBWy1ZIj9NZ9wLgGDheP6odNEAWtBGTiFs2JThDe8=; b=fbFHl75raSPnQuyhPsOOiOgsrtlhWgvD7foMdAREsZeIyMUeUGz6GqM4V9ll2lrG0K ybF2jhyJ58vbNF4puWdex7XyCJYx7/oODIqrVUOf3SixA/r5c+aZqblIhH4wvMQNIVD6 y3j5LsR2CHc4X2cpqFLXCBDMkUaCvwxdDIi24= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; b=T/4+cu+wFOLKoL4TQa9hVpa42XwGyhwGtxCH/j6BoJSZVSZ4qnR1R2Dh4rTtbkrDw1 693oYGmT0Dc9sMNFN2qCoWMK2xcA4xfSfuNvuR8wiE7tW7xRjEiVZdLW88EnQyECqlIe GsbkzE5zg6toIR1XW4891JuIYH0TDSN3dtpfw= Received: by 10.223.5.69 with SMTP id 5mr7581131fau.8.1270575398295; Tue, 06 Apr 2010 10:36:38 -0700 (PDT) Received: from localhost ([188.134.12.208]) by mx.google.com with ESMTPS id 26sm15508068fks.52.2010.04.06.10.36.35 (version=TLSv1/SSLv3 cipher=RC4-MD5); Tue, 06 Apr 2010 10:36:35 -0700 (PDT) Date: Tue, 6 Apr 2010 21:34:59 +0400 From: Mikle Krutov To: freebsd-fs@freebsd.org Message-ID: <20100406173459.GA1285@takino.homeftp.org> References: <20100404191844.GA5071@takino.homeftp.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: 
inline In-Reply-To: <20100404191844.GA5071@takino.homeftp.org> User-Agent: Mutt/1.5.20 (2009-06-14) Subject: Re: Strange ZFS performance X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 17:36:41 -0000 On Sun, Apr 04, 2010 at 11:18:45PM +0400, Mikle wrote: > Hello, list! > I've got some strange problem with a one-disk zfs pool: read/write performance for files on the fs (dd if=/dev/zero of=/mountpoint/file bs=4M count=100) gives me only 2 MB/s, while reading from the disk (dd if=/dev/disk of=/dev/zero bs=4M count=100) gives me ~70MB/s. > The pool is about 80% full; the PC with the pool has 2GB of ram (1.5 of which is free); i've done no tuning in loader.conf and sysctl.conf for zfs. In dmesg there are no error messages related to the disk (dmesg|grep ^ad12); s.m.a.r.t. seems OK. > Some time ago the disk was OK, and nothing in software/hardware has changed since that day. > Any ideas what could have happened to the disk? > > Wbr, Well, list, somehow after moving some important files (about 50GB) to another disk, performance became OK. I did not get why it was so crappy; if anyone could give me any ideas, that would be great. Everyone, thanks for the replies.
-- Wbr, Krutov Mikle From owner-freebsd-fs@FreeBSD.ORG Tue Apr 6 23:10:03 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6B22D106566B for ; Tue, 6 Apr 2010 23:10:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 591EC8FC1A for ; Tue, 6 Apr 2010 23:10:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o36NA34B039420 for ; Tue, 6 Apr 2010 23:10:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o36NA3oh039419; Tue, 6 Apr 2010 23:10:03 GMT (envelope-from gnats) Date: Tue, 6 Apr 2010 23:10:03 GMT Message-Id: <201004062310.o36NA3oh039419@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: dfilter@FreeBSD.ORG (dfilter service) Cc: Subject: Re: kern/144330: commit references a PR X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dfilter service List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 06 Apr 2010 23:10:03 -0000 The following reply was made to PR kern/144330; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: kern/144330: commit references a PR Date: Tue, 6 Apr 2010 23:03:39 +0000 (UTC) Author: rmacklem Date: Tue Apr 6 23:03:20 2010 New Revision: 206288 URL: http://svn.freebsd.org/changeset/base/206288 Log: MFC: r205562 When the regular NFS server replied to a UDP client out of the replay cache, it did not free the request argument mbuf list, resulting in a leak. This patch fixes that leak. 
PR: kern/144330 Modified: stable/8/sys/rpc/svc.c Directory Properties: stable/8/sys/ (props changed) stable/8/sys/amd64/include/xen/ (props changed) stable/8/sys/cddl/contrib/opensolaris/ (props changed) stable/8/sys/contrib/dev/acpica/ (props changed) stable/8/sys/contrib/pf/ (props changed) stable/8/sys/dev/xen/xenpci/ (props changed) Modified: stable/8/sys/rpc/svc.c
==============================================================================
--- stable/8/sys/rpc/svc.c	Tue Apr 6 21:39:18 2010	(r206287)
+++ stable/8/sys/rpc/svc.c	Tue Apr 6 23:03:20 2010	(r206288)
@@ -819,9 +819,11 @@ svc_getreq(SVCXPRT *xprt, struct svc_req
 			free(r->rq_addr, M_SONAME);
 			r->rq_addr = NULL;
 		}
+		m_freem(args);
 		goto call_done;

 	default:
+		m_freem(args);
 		goto call_done;
 	}
 }
_______________________________________________ svn-src-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/svn-src-all To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Wed Apr 7 07:53:29 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B76A11065673 for ; Wed, 7 Apr 2010 07:53:29 +0000 (UTC) (envelope-from spork@bway.net) Received: from xena.bway.net (xena.bway.net [216.220.96.26]) by mx1.freebsd.org (Postfix) with ESMTP id 5B8CD8FC1A for ; Wed, 7 Apr 2010 07:53:29 +0000 (UTC) Received: (qmail 38996 invoked by uid 0); 7 Apr 2010 07:26:47 -0000 Received: from unknown (HELO ?10.3.2.41?)
(spork@96.57.144.66) by smtp.bway.net with (DHE-RSA-AES256-SHA encrypted) SMTP; 7 Apr 2010 07:26:47 -0000 Date: Wed, 7 Apr 2010 03:26:46 -0400 (EDT) From: Charles Sprickman X-X-Sender: spork@hotlap.local To: freebsd-fs@freebsd.org Message-ID: User-Agent: Alpine 2.00 (OSX 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII Subject: ZFS - best practices, alternate root X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 Apr 2010 07:53:29 -0000 Howdy, I'm starting to roll out zfs in production and my primary method for doing any remote rescue work is to do a network boot that loads a mfsroot image with a number of extra tools (including zpool and zfs commands) and loader.conf options that load zfs.ko and opensolaris.ko. I've been documenting a number of the gotchas and little non-obvious things I've found when running a root on zfs setup. One place where I'm getting stuck is working with the zfs root pool when I'm booted off alternate media. For example, I do a network boot and a "zpool list" shows no pools, so I do a "zpool import -f zroot". Is this correct? When I'm done do I need to do an export and import cycle to get things ready for booting off the local zfs pool on reboot? The other little point of confusion is dealing with mounting the zfs root filesystem when in my netboot environment. Mounting say, just the root fs manually (ie: mount -t zfs zroot /mnt) works, but when I unmount it and do a "zfs list", I see that zfs now thinks "/mnt" is the new mountpoint. I've been digging around opensolaris docs, and I'm not seeing what the proper way is to "temporarily" alter a zfs mountpoint. I know that I can manually set it back to legacy root, but it's bad news if I forget that step. 
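[Editor's note: one commonly used answer to the temporary-mountpoint problem described above is to import the pool with an altroot. The `-R` flag to `zpool import` prefixes all dataset mountpoints for the lifetime of that import only and is not written back into the pool, so the root dataset's mountpoint property is never left pointing at /mnt. This is a hedged sketch, not advice from the thread — the pool name "zroot" and the /mnt path are taken from the mail; the commands are echoed rather than executed so the workflow can be reviewed first:]

```shell
# Rescue-media workflow sketch (assumes pool "zroot", scratch root /mnt).
# "zpool import -R" sets a temporary altroot: mountpoints are offset
# under $ALTROOT for this import only and the change is NOT persisted,
# avoiding the stale "/mnt" mountpoint problem described above.
POOL=zroot
ALTROOT=/mnt
echo "zpool import -f -R $ALTROOT $POOL"   # import under the altroot
echo "# ... do rescue work under $ALTROOT ..."
echo "zpool export $POOL"                  # cleanly close before rebooting
```

Because the altroot never touches the on-disk mountpoint properties, no `zfs set mountpoint=legacy` cleanup step should be needed afterwards; verify against zpool(8) on the release in use.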
Lastly, say I've imported the pool, mounted root, altered something on the mounted zfs filesystem, unmounted it, set the mountpoint back to legacy root, what's the proper way to prep the pool to be ready for my next normal boot. Do I need to do the "zpool export/import" shuffle and copy the /boot/zfs/boot.cache back over in this situation? Thanks, Charles From owner-freebsd-fs@FreeBSD.ORG Wed Apr 7 13:20:03 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C3FAC106564A for ; Wed, 7 Apr 2010 13:20:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 984158FC13 for ; Wed, 7 Apr 2010 13:20:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o37DK3Hr020542 for ; Wed, 7 Apr 2010 13:20:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o37DK39q020541; Wed, 7 Apr 2010 13:20:03 GMT (envelope-from gnats) Date: Wed, 7 Apr 2010 13:20:03 GMT Message-Id: <201004071320.o37DK39q020541@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Dmitry Afanasiev Cc: Subject: Re: kern/145234: [zfs] zvol with org.freebsd:swap=on crashes zfs list X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Dmitry Afanasiev List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 Apr 2010 13:20:03 -0000 The following reply was made to PR kern/145234; it has been noted by GNATS. 
From: Dmitry Afanasiev To: Andriy Gapon Cc: bug-followup@FreeBSD.org Subject: Re: kern/145234: [zfs] zvol with org.freebsd:swap=on crashes zfs list Date: Wed, 07 Apr 2010 17:16:32 +0400 On 06.04.2010 10:54, Andriy Gapon wrote: > This should be resolved in head now. Yes, after update "zfs list" is working properly. Please close PR. From owner-freebsd-fs@FreeBSD.ORG Wed Apr 7 18:30:29 2010 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C1B8D106566B for ; Wed, 7 Apr 2010 18:30:29 +0000 (UTC) (envelope-from scode@scode.org) Received: from mail-fx0-f225.google.com (mail-fx0-f225.google.com [209.85.220.225]) by mx1.freebsd.org (Postfix) with ESMTP id 43E918FC22 for ; Wed, 7 Apr 2010 18:30:28 +0000 (UTC) Received: by fxm25 with SMTP id 25so8573fxm.3 for ; Wed, 07 Apr 2010 11:30:28 -0700 (PDT) MIME-Version: 1.0 Sender: scode@scode.org Received: by 10.103.223.16 with HTTP; Wed, 7 Apr 2010 11:05:56 -0700 (PDT) X-Originating-IP: [213.114.159.69] Date: Wed, 7 Apr 2010 20:05:56 +0200 X-Google-Sender-Auth: d29b30948d989289 Received: by 10.103.84.1 with SMTP id m1mr3570783mul.26.1270663556217; Wed, 07 Apr 2010 11:05:56 -0700 (PDT) Message-ID: From: Peter Schuller To: freebsd-fs@FreeBSD.org Content-Type: text/plain; charset=UTF-8 Cc: rercola@acm.jhu.edu Subject: ZFS arc sizing (maybe related to kern/145229) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 Apr 2010 18:30:29 -0000 Hello, I found this PR: http://www.freebsd.org/cgi/query-pr.cgi?pr=145229 Part of the behavior matches mine, which is that I would see the ARC sit at around 200 MB in size and stay there - even though the minimum arc size was set to 800 MB. While I cannot say anything about 8.0-RC1 vs. 
8.0 as talked about in the PR, I'm definitely seeing some unexpected ARC sizing. For various anecdotal reasons I thought that use of mmap() could be the triggering factor, so I wrote a small test program: http://distfiles.scode.org/mlref/alloc.c (compile with: gcc -g -std=c99 -o alloc alloc.c). It can be used to allocate a certain number of MB and fill it with data. Allocation by either malloc() or mmap(): ./alloc malloc 30000 ./alloc mmap 30000 # will write to an mkstemp generated file in pwd I have monitored memory use as reported by top, and the ARC size according to arcstats.size, while running this. Some observations (a timeline follows with points marked using "== Tn =="): * In general behavior is great after a fresh reboot; the ARC is at its maximum size (after filling it). * More generally, so far I have never seen the ARC shrink too much as long as the 'free' memory ('free' in top terms) is high. * After a fresh boot I can successfully force the ARC to shrink by allocating memory - malloc() or mmap(), both work. * After killing alloc, I can get the ARC to grow again by reading some large files. So it's not as simple as "it'll shrink but never grow", for both malloc() and mmap() based contention. * A day or so later the ARC had become a bit smaller - now at ~ 630 MB (ktorrent running in the background doing some reads, otherwise normal desktop). This is T1 on the timeline below. * By running alloc I was successfully able to make the ARC shrink a bit more, and then grow back again by reading data. This is T1-T2. Note how it fluctuates and seems to want to stay at around 600 MB even though I'm reading in data, and even though there was extra 'free' as a result of the alloc run. * Around T2 I re-ran alloc again and let it eat a significant amount of memory (4 GB). * Then I began reading big files again. At first it upped a bit and landed on almost exactly 800, which is suspiciously correlated with the 800 MB minimum arc size.
* After further reading, it finally (T3) decides to grow further up to the maximum. In general my anecdotal feeling is that memory tends to end up in the active or inactive categories in top (instead of free), resulting in a shrunk ARC. By pushing whatever is in active/inactive away using the allocation program, I can then, upon killing it, convert that memory to 'free' again, allowing the ARC to size itself properly. I have not yet checked any code, I'm just anecdotally observing that the ARC sizing seems to be a function of mostly the 'free' memory. Related is that I only run ZFS on this machine, and it is unclear to me *what* it is that is being counted as inactive/active to any great extent (is there a good way to figure this out?). Worth mentioning is that "unexplained" inactive/active memory reminds me of an old thread I started: http://freebsd.monkey.org/freebsd-current/200709/msg00201.html That thread dealt mostly with a discrepancy in total memory use, which was apparently a bug, but I distinctly remember the same behavior back then. I saw non-wired but active/inactive (but non-free) memory and I was never sure *what* it was, and the only way to make it go back to free was to do allocations. Back to today and ktorrent, note that stopping ktorrent (thus implying munmap()) does *not* instantly produce free memory. In the absence of non-ZFS file systems that might cause caching to be visible as active/inactive, I'm not sure why I am seeing such memory accounted for.
Time line follows; these are just dates and ARC sizes in MB resulting from a little for loop looking at arcstats.size: Wed Apr 7 19:36:17 CEST 2010: 627.361 Wed Apr 7 19:36:20 CEST 2010: 626.334 Wed Apr 7 19:36:23 CEST 2010: 626.196 Wed Apr 7 19:36:27 CEST 2010: 623.965 Wed Apr 7 19:36:30 CEST 2010: 626.42 Wed Apr 7 19:36:33 CEST 2010: 626.963 Wed Apr 7 19:36:36 CEST 2010: 626.947 Wed Apr 7 19:36:39 CEST 2010: 628.621 Wed Apr 7 19:36:43 CEST 2010: 628.937 Wed Apr 7 19:36:46 CEST 2010: 628.984 Wed Apr 7 19:36:49 CEST 2010: 629.015 == T1 == Wed Apr 7 19:36:52 CEST 2010: 406.518 Wed Apr 7 19:36:55 CEST 2010: 411.562 Wed Apr 7 19:36:59 CEST 2010: 414.874 Wed Apr 7 19:37:02 CEST 2010: 435.895 Wed Apr 7 19:37:05 CEST 2010: 517.359 Wed Apr 7 19:37:08 CEST 2010: 616.899 Wed Apr 7 19:37:11 CEST 2010: 718.44 Wed Apr 7 19:37:14 CEST 2010: 502.596 Wed Apr 7 19:37:17 CEST 2010: 587.716 Wed Apr 7 19:37:20 CEST 2010: 682.35 Wed Apr 7 19:37:23 CEST 2010: 541.922 Wed Apr 7 19:37:26 CEST 2010: 631.701 Wed Apr 7 19:37:29 CEST 2010: 529.565 Wed Apr 7 19:37:33 CEST 2010: 534.683 Wed Apr 7 19:37:36 CEST 2010: 539.814 Wed Apr 7 19:37:39 CEST 2010: 521.79 Wed Apr 7 19:37:42 CEST 2010: 517.15 Wed Apr 7 19:37:45 CEST 2010: 519.398 Wed Apr 7 19:37:48 CEST 2010: 524.562 Wed Apr 7 19:37:51 CEST 2010: 553.67 == T2 == Wed Apr 7 19:37:54 CEST 2010: 649.677 Wed Apr 7 19:37:57 CEST 2010: 737.425 Wed Apr 7 19:38:00 CEST 2010: 800.162 Wed Apr 7 19:38:03 CEST 2010: 800.324 Wed Apr 7 19:38:07 CEST 2010: 800.628 Wed Apr 7 19:38:10 CEST 2010: 801.115 Wed Apr 7 19:38:13 CEST 2010: 802.433 Wed Apr 7 19:38:17 CEST 2010: 800.622 Wed Apr 7 19:38:21 CEST 2010: 800.585 Wed Apr 7 19:38:24 CEST 2010: 800.913 Wed Apr 7 19:38:27 CEST 2010: 800.773 Wed Apr 7 19:38:31 CEST 2010: 800.905 Wed Apr 7 19:38:34 CEST 2010: 800.891 Wed Apr 7 19:38:37 CEST 2010: 800.592 Wed Apr 7 19:38:40 CEST 2010: 801.034 Wed Apr 7 19:38:44 CEST 2010: 800.535 == T3 == Wed Apr 7 19:38:47 CEST 2010: 877.722 Wed Apr 7 19:38:51 CEST 2010: 
990.442 Wed Apr 7 19:38:54 CEST 2010: 1085.77 Wed Apr 7 19:38:57 CEST 2010: 1168.08 Wed Apr 7 19:39:00 CEST 2010: 1276.54 Wed Apr 7 19:39:03 CEST 2010: 1340.78 Wed Apr 7 19:39:06 CEST 2010: 1443.13 Wed Apr 7 19:39:09 CEST 2010: 1561.67 Wed Apr 7 19:39:12 CEST 2010: 1623.57 Wed Apr 7 19:39:15 CEST 2010: 1661.85 Wed Apr 7 19:39:18 CEST 2010: 1663.32 Wed Apr 7 19:39:21 CEST 2010: 1670.45 Wed Apr 7 19:39:24 CEST 2010: 1671.15 Wed Apr 7 19:39:28 CEST 2010: 1674.46 Wed Apr 7 19:39:31 CEST 2010: 1678.91 Wed Apr 7 19:39:34 CEST 2010: 1681.35 Wed Apr 7 19:39:37 CEST 2010: 1683.25 Wed Apr 7 19:39:40 CEST 2010: 1685.97 Wed Apr 7 19:39:43 CEST 2010: 1688.21 Wed Apr 7 19:39:46 CEST 2010: 1696.55 Wed Apr 7 19:39:49 CEST 2010: 1698.85 Wed Apr 7 19:39:52 CEST 2010: 1700 Wed Apr 7 19:39:55 CEST 2010: 1700.01 Wed Apr 7 19:39:58 CEST 2010: 1700.02 Wed Apr 7 19:40:01 CEST 2010: 1700.03 -- / Peter Schuller From owner-freebsd-fs@FreeBSD.ORG Wed Apr 7 18:46:40 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 27B5C1065673 for ; Wed, 7 Apr 2010 18:46:40 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-pw0-f54.google.com (mail-pw0-f54.google.com [209.85.160.54]) by mx1.freebsd.org (Postfix) with ESMTP id EFF9E8FC1A for ; Wed, 7 Apr 2010 18:46:39 +0000 (UTC) Received: by pwi9 with SMTP id 9so1386539pwi.13 for ; Wed, 07 Apr 2010 11:46:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:received:in-reply-to :references:date:x-google-sender-auth:received:message-id:subject :from:to:cc:content-type; bh=bekRJ466Znl8Xsia05ITNxLjGtCViww2/UZ/GUnywVw=; b=GOAyTZptnUn1JFSVVBjGX9FXitp065NnDeQzz8PPqdOvTCzNtAbjkdbnV0yezdIe5P bZTE/w+nTGK+jCcZeTJnRmwxmsuq8Tlasr9Gm3eoRyQsu+M8Ub4Snjv96ud0uP9gNDa2 2g1sD+2NOlZs95bJwok7k9Li5SjyAVyePEI7Y= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; 
s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; b=jz5r1N3MpylJ+KuDMdz+0KSigsBJTkB3BDj19mq/bEtdNZ+TyeL/qn0TOjg4WFgtli gHyCghBUDy0ZtumyhNWqTixHom2qvid2KOflgOL5Gat5hZuUaw9vQ1JDIdVq5WDXJ7T3 VLJ+TxqdF1XDX/Qf9tjwxox3S8kATzvJezlPk= MIME-Version: 1.0 Sender: rincebrain@gmail.com Received: by 10.231.60.197 with HTTP; Wed, 7 Apr 2010 11:46:39 -0700 (PDT) In-Reply-To: References: Date: Wed, 7 Apr 2010 14:46:39 -0400 X-Google-Sender-Auth: ef541872665b5988 Received: by 10.114.187.29 with SMTP id k29mr281508waf.208.1270665999504; Wed, 07 Apr 2010 11:46:39 -0700 (PDT) Message-ID: From: Rich To: Peter Schuller Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS arc sizing (maybe related to kern/145229) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 Apr 2010 18:46:40 -0000 A datapoint for you: Now running 8-STABLE (plus the mbuf leak fix which went in recently), here's my ARC stats and ARC sysctl settings after the server was up for about a week (5 days) after that: ARC Size: Current Size: 587.49M (arcsize) Target Size: (Adaptive) 587.63M (c) Min Size (Hard Limit): 512.00M (arc_min) Max Size (Hard Limit): 3072.00M (arc_max) ARC Size Breakdown: Recently Used Cache Size: 98.28% 577.50M (p) Frequently Used Cache Size: 1.72% 10.12M (c-p) ARC Efficiency: Cache Access Total: 2602789964 Cache Hit Ratio: 96.11% 2501461882 Cache Miss Ratio: 3.89% 101328082 Actual Hit Ratio: 87.65% 2281380527 and vfs.zfs.arc_meta_limit=1073741824 vfs.zfs.arc_meta_used=548265792 vfs.zfs.arc_min=536870912 vfs.zfs.arc_max=3221225472 So it very clearly limits to near the minimum size, but whether this is design or accidental behavior, I'm unsure. 
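[Editor's note: the percentages in Rich's ARC efficiency report can be recomputed from the raw counters he posted, which is a quick way to sanity-check such reports. The numbers below are copied verbatim from the message above; awk is used only for the arithmetic:]

```shell
# Recompute ARC hit/miss percentages from the raw counters quoted above
# (total accesses, cache hits, cache misses from Rich's ARC report).
ratios=$(awk 'BEGIN {
    total = 2602789964; hits = 2501461882; misses = 101328082
    printf "hit%%=%.2f miss%%=%.2f", 100 * hits / total, 100 * misses / total
}')
echo "$ratios"   # prints: hit%=96.11 miss%=3.89
```

The recomputed values match the 96.11%/3.89% figures in the report, so the counters and ratios are internally consistent.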
From owner-freebsd-fs@FreeBSD.ORG Wed Apr 7 22:03:32 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 07E671065672; Wed, 7 Apr 2010 22:03:32 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D15888FC21; Wed, 7 Apr 2010 22:03:31 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o37M3Vj5075405; Wed, 7 Apr 2010 22:03:31 GMT (envelope-from avg@freefall.freebsd.org) Received: (from avg@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o37M3Vno075401; Wed, 7 Apr 2010 22:03:31 GMT (envelope-from avg) Date: Wed, 7 Apr 2010 22:03:31 GMT Message-Id: <201004072203.o37M3Vno075401@freefall.freebsd.org> To: KOT@MATPOCKuH.Ru, avg@FreeBSD.org, freebsd-fs@FreeBSD.org From: avg@FreeBSD.org Cc: Subject: Re: kern/145234: [zfs] zvol with org.freebsd:swap=on crashes zfs list X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 Apr 2010 22:03:32 -0000 Synopsis: [zfs] zvol with org.freebsd:swap=on crashes zfs list State-Changed-From-To: open->closed State-Changed-By: avg State-Changed-When: Wed Apr 7 21:59:08 UTC 2010 State-Changed-Why: The issue is resolved in head (the only branch where the problem existed). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=145234 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 7 23:43:24 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5E880106566C for ; Wed, 7 Apr 2010 23:43:24 +0000 (UTC) (envelope-from nowak@funil.de) Received: from mail.csp-systems.de (mail.csp-systems.de [83.246.83.230]) by mx1.freebsd.org (Postfix) with ESMTP id 15F9B8FC13 for ; Wed, 7 Apr 2010 23:43:23 +0000 (UTC) Received: by mail.csp-systems.de (Postfix, from userid 602) id B96666928CB; Thu, 8 Apr 2010 01:20:52 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail.csp-systems.de X-Spam-Level: * X-Spam-Status: No, score=1.3 required=6.0 tests=AWL, BAYES_00, FH_DATE_PAST_20XX, RDNS_NONE autolearn=no version=3.2.5 Received: from master.sinuspl.net (unknown [83.246.67.202]) by mail.csp-systems.de (Postfix) with ESMTP id 140C66901F7; Thu, 8 Apr 2010 01:20:48 +0200 (CEST) Received: by master.sinuspl.net (Postfix, from userid 8) id A6BAC108033F; Thu, 8 Apr 2010 01:20:46 +0200 (CEST) Received: from [172.19.191.2] (088156221125.bialystok.vectranet.pl [88.156.221.125]) by master.sinuspl.net (Postfix) with ESMTPA id F30FD10802D7; Thu, 8 Apr 2010 01:20:45 +0200 (CEST) Message-ID: <4BBD1345.9020503@funil.de> Date: Thu, 08 Apr 2010 01:20:37 +0200 From: Adam Nowacki User-Agent: Thunderbird 2.0.0.24 (Windows/20100228) MIME-Version: 1.0 To: Rich References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: ZFS arc sizing (maybe related to kern/145229) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 07 Apr 2010 23:43:24 -0000 check kstat.zfs.misc.arcstats.memory_throttle_count This counter is increased 
every time zfs thinks the system is running low on memory and will force a write flush and reduce arc size to minimum. The biggest problem is that the code is counting only free memory and completely ignoring other memory that can be immediately freed, like cached files from ufs. This is very easy to trigger on a mixed ufs and zfs system by just reading enough data from ufs to fill its cache; zfs will begin throttling and will continue doing so even with no further ufs reads or writes. Rich wrote: > A datapoint for you: > Now running 8-STABLE (plus the mbuf leak fix which went in recently), > here's my ARC stats and ARC sysctl settings after the server was up > for about a week (5 days) after that: > ARC Size: > Current Size: 587.49M (arcsize) > Target Size: (Adaptive) 587.63M (c) > Min Size (Hard Limit): 512.00M (arc_min) > Max Size (Hard Limit): 3072.00M (arc_max) > > ARC Size Breakdown: > Recently Used Cache Size: 98.28% 577.50M (p) > Frequently Used Cache Size: 1.72% 10.12M (c-p) > > ARC Efficiency: > Cache Access Total: 2602789964 > Cache Hit Ratio: 96.11% 2501461882 > Cache Miss Ratio: 3.89% 101328082 > Actual Hit Ratio: 87.65% 2281380527 > > and > > vfs.zfs.arc_meta_limit=1073741824 > vfs.zfs.arc_meta_used=548265792 > vfs.zfs.arc_min=536870912 > vfs.zfs.arc_max=3221225472 > > So it very clearly limits to near the minimum size, but whether this > is design or accidental behavior, I'm unsure.
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > From owner-freebsd-fs@FreeBSD.ORG Thu Apr 8 00:00:04 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 114061065673 for ; Thu, 8 Apr 2010 00:00:04 +0000 (UTC) (envelope-from nowak@xpam.de) Received: from mail.csp-systems.de (mail.csp-systems.de [83.246.83.230]) by mx1.freebsd.org (Postfix) with ESMTP id B66CC8FC16 for ; Thu, 8 Apr 2010 00:00:03 +0000 (UTC) Received: by mail.csp-systems.de (Postfix, from userid 602) id F0FF969330A; Thu, 8 Apr 2010 01:32:06 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail.csp-systems.de X-Spam-Level: ***** X-Spam-Status: No, score=5.7 required=6.0 tests=AWL, BAYES_00, FH_DATE_PAST_20XX, RDNS_NONE autolearn=no version=3.2.5 Received: from master.sinuspl.net (unknown [83.246.67.202]) by mail.csp-systems.de (Postfix) with ESMTP id 411DD68F145 for ; Thu, 8 Apr 2010 01:32:02 +0200 (CEST) Received: by master.sinuspl.net (Postfix, from userid 8) id 74693108033F; Thu, 8 Apr 2010 01:32:00 +0200 (CEST) Received: from [172.19.191.2] (088156221125.bialystok.vectranet.pl [88.156.221.125]) by master.sinuspl.net (Postfix) with ESMTPA id D3A4010802D7 for ; Thu, 8 Apr 2010 01:31:59 +0200 (CEST) Message-ID: <4BBD15E7.5010006@xpam.de> Date: Thu, 08 Apr 2010 01:31:51 +0200 From: Adam Nowacki User-Agent: Thunderbird 2.0.0.24 (Windows/20100228) MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Subject: Re: ZFS arc sizing (maybe related to kern/145229) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: 
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 00:00:04 -0000 check kstat.zfs.misc.arcstats.memory_throttle_count This counter is increased every time zfs thinks system is running low on memory and will force a write flush and reduce arc size to minimum. Biggest problem is that the code is counting only free memory and completely ignoring other memory that can be immediately freed like cached files from ufs. This is very easy to trigger on mixed ufs and zfs system by just reading enough data from ufs to fill its cache, zfs will begin throttling and will continue doing so even with no further ufs reads or writes. Rich wrote: > A datapoint for you: > Now running 8-STABLE (plus the mbuf leak fix which went in recently), > here's my ARC stats and ARC sysctl settings after the server was up > for about a week (5 days) after that: > ARC Size: > Current Size: 587.49M (arcsize) > Target Size: (Adaptive) 587.63M (c) > Min Size (Hard Limit): 512.00M (arc_min) > Max Size (Hard Limit): 3072.00M (arc_max) > > ARC Size Breakdown: > Recently Used Cache Size: 98.28% 577.50M (p) > Frequently Used Cache Size: 1.72% 10.12M (c-p) > > ARC Efficiency: > Cache Access Total: 2602789964 > Cache Hit Ratio: 96.11% 2501461882 > Cache Miss Ratio: 3.89% 101328082 > Actual Hit Ratio: 87.65% 2281380527 > > and > > vfs.zfs.arc_meta_limit=1073741824 > vfs.zfs.arc_meta_used=548265792 > vfs.zfs.arc_min=536870912 > vfs.zfs.arc_max=3221225472 > > So it very clearly limits to near the minimum size, but whether this > is design or accidental behavior, I'm unsure. 
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > From owner-freebsd-fs@FreeBSD.ORG Thu Apr 8 03:25:58 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A168E1065741 for ; Thu, 8 Apr 2010 03:25:58 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-yw0-f171.google.com (mail-yw0-f171.google.com [209.85.211.171]) by mx1.freebsd.org (Postfix) with ESMTP id 5B1F18FC23 for ; Thu, 8 Apr 2010 03:25:58 +0000 (UTC) Received: by ywh1 with SMTP id 1so895217ywh.3 for ; Wed, 07 Apr 2010 20:25:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:received:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=EQ5ctBqxU0MtbnIFGeuJ9CFHNjk+ThBaVwia+mbUD+8=; b=hcGZP0tIfUAF2YVtruBiSVp1qARpoUyFgdr/NVepf2madEnQxA9r/uCFeb7DFG9GOS k0zlxFGyMyHHLL67h9N9+w5IRBLCf3a2EqwMGf4b4zk+LK5aTADNtK6LuP1su4XFa5ce pZIq9t4Dy9sCf38ifzXodELKrFlvCHG0ytKeM= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=daTpCMJgYVzcb1n2LnefBP13lwh3ImWnh8LV8YGQAxs3mjhy/QqibghWcVM9nvI0qG HXJLrdSRkGGX7kFshloyuUieLRQOjRWnN7t5rRPpXdHconj7kyk+L4iJTzE8pfVem6/K 7ep7NoW4F7CKb3iArx68ys2FOVo6rZNbqQLDo= MIME-Version: 1.0 Received: by 10.231.60.197 with HTTP; Wed, 7 Apr 2010 20:25:57 -0700 (PDT) In-Reply-To: <4BBD15E7.5010006@xpam.de> References: <4BBD15E7.5010006@xpam.de> Date: Wed, 7 Apr 2010 23:25:57 -0400 Received: by 10.150.66.15 with SMTP id o15mr9807330yba.74.1270697157368; Wed, 07 Apr 2010 20:25:57 -0700 (PDT) Message-ID: From: Rich To: Adam Nowacki Content-Type: 
text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS arc sizing (maybe related to kern/145229) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 03:25:58 -0000 kstat.zfs.misc.arcstats.memory_throttle_count: 673016 Since UFS has no files of any reasonable size on it (it's literally just rootFS)... - Rich On Wed, Apr 7, 2010 at 7:31 PM, Adam Nowacki wrote: > check kstat.zfs.misc.arcstats.memory_throttle_count > This counter is increased every time zfs thinks system is running low on > memory and will force a write flush and reduce arc size to minimum. Biggest > problem is that the code is counting only free memory and completely > ignoring other memory that can be immediately freed like cached files from > ufs. This is very easy to trigger on mixed ufs and zfs system by just > reading enough data from ufs to fill its cache, zfs will begin throttling > and will continue doing so even with no further ufs reads or writes. 
> > Rich wrote: >> >> A datapoint for you: >> Now running 8-STABLE (plus the mbuf leak fix which went in recently), >> here's my ARC stats and ARC sysctl settings after the server was up >> for about a week (5 days) after that: >> ARC Size: >> Current Size: 587.49M (arcsize) >> Target Size: (Adaptive) 587.63M (c) >> Min Size (Hard Limit): 512.00M (arc_min) >> Max Size (Hard Limit): 3072.00M (arc_max) >> >> ARC Size Breakdown: >> Recently Used Cache Size: 98.28% 577.50M (p) >> Frequently Used Cache Size: 1.72% 10.12M (c-p) >> >> ARC Efficiency: >> Cache Access Total: 2602789964 >> Cache Hit Ratio: 96.11% 2501461882 >> Cache Miss Ratio: 3.89% 101328082 >> Actual Hit Ratio: 87.65% 2281380527 >> >> and >> >> vfs.zfs.arc_meta_limit=1073741824 >> vfs.zfs.arc_meta_used=548265792 >> vfs.zfs.arc_min=536870912 >> vfs.zfs.arc_max=3221225472 >> >> So it very clearly limits to near the minimum size, but whether this >> is design or accidental behavior, I'm unsure. 
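The byte values in those vfs.zfs.* sysctls line up with the "M" figures in the ARC summary quoted above; a small conversion loop (a sketch over the quoted numbers only, 1 MiB = 1048576 bytes) makes the correspondence explicit:

```shell
#!/bin/sh
# Convert the quoted vfs.zfs.* byte values to MiB to cross-check
# them against the 512.00M / 3072.00M figures in the ARC summary.
for pair in arc_min:536870912 arc_max:3221225472 arc_meta_limit:1073741824; do
    name=${pair%%:*}        # sysctl short name
    bytes=${pair##*:}       # value in bytes
    echo "$name = $((bytes / 1048576)) MiB"
done
# arc_min = 512 MiB, arc_max = 3072 MiB, arc_meta_limit = 1024 MiB
```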
>> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> >> > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Apr 8 03:37:45 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A1C2D106566B for ; Thu, 8 Apr 2010 03:37:45 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta05.emeryville.ca.mail.comcast.net (qmta05.emeryville.ca.mail.comcast.net [76.96.30.48]) by mx1.freebsd.org (Postfix) with ESMTP id 860DF8FC12 for ; Thu, 8 Apr 2010 03:37:45 +0000 (UTC) Received: from omta18.emeryville.ca.mail.comcast.net ([76.96.30.74]) by qmta05.emeryville.ca.mail.comcast.net with comcast id 2rbL1e0051bwxycA5rdmGU; Thu, 08 Apr 2010 03:37:46 +0000 Received: from koitsu.dyndns.org ([98.248.46.159]) by omta18.emeryville.ca.mail.comcast.net with comcast id 2rh11e0013S48mS8erh5M3; Thu, 08 Apr 2010 03:41:13 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 166449B419; Wed, 7 Apr 2010 20:36:30 -0700 (PDT) Date: Wed, 7 Apr 2010 20:36:30 -0700 From: Jeremy Chadwick To: Rich Message-ID: <20100408033630.GA69748@icarus.home.lan> References: <4BBD15E7.5010006@xpam.de> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: User-Agent: Mutt/1.5.20 (2009-06-14) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS arc sizing (maybe related to kern/145229) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 03:37:45 -0000 On Wed, Apr 07, 2010 at 11:25:57PM -0400, Rich wrote: > kstat.zfs.misc.arcstats.memory_throttle_count: 673016 > > Since UFS has no files of any reasonable size on it (it's literally > just rootFS)... > > ... > > > Rich wrote: > >> > >>        vfs.zfs.arc_meta_limit=1073741824 > >>        vfs.zfs.arc_meta_used=548265792 > >>        vfs.zfs.arc_min=536870912 > >>        vfs.zfs.arc_max=3221225472 > >> > >> So it very clearly limits to near the minimum size, but whether this > >> is design or accidental behavior, I'm unsure. I don't see a vm.kmem_size entry in there, so I can't see how you're going to reach arc_max ever. -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. PGP: 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Thu Apr 8 03:44:32 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C81621065677 for ; Thu, 8 Apr 2010 03:44:32 +0000 (UTC) (envelope-from rincebrain@gmail.com) Received: from mail-gx0-f211.google.com (mail-gx0-f211.google.com [209.85.217.211]) by mx1.freebsd.org (Postfix) with ESMTP id 7F20D8FC14 for ; Thu, 8 Apr 2010 03:44:32 +0000 (UTC) Received: by gxk3 with SMTP id 3so931131gxk.13 for ; Wed, 07 Apr 2010 20:44:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:received:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=VdvCyBiUEGCZGKd4u6oNAW5BKYHmFJmTfjMtjGP0eMI=; b=v8C9Pf3G1qzBNHSYw53NYnMF4WRC+lnTt1OQkzWLorJQlZhv8f0kjSwupG7Bsxt/yK FLyAp8fclrn4LCNxONYnVhrxqIkVlONpFfkDeYSEqN3UTzn4dtnTxhK0JqSORYvK0B4/ gQjckbCoFJ1GrJS7/ktOm7xFZpstXi9MV/dd8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; 
s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=ELOIJ/GfJVucWJEz0pVJ8qItiYkUM1aTVysq7HAPXLuLbif3Vauw5SS56IQoed3qq3 hf3fMaxOID6Dvu1GE3uedoXpDMEvJYPL6c1GcYsCQoDQ0gtLxsCK5GLLD6PVeXgV93sH R5YmCEx5aGWFdki7GS3NpujtpR8I/5dx6IT1w= MIME-Version: 1.0 Received: by 10.231.60.197 with HTTP; Wed, 7 Apr 2010 20:44:31 -0700 (PDT) In-Reply-To: <20100408033630.GA69748@icarus.home.lan> References: <4BBD15E7.5010006@xpam.de> <20100408033630.GA69748@icarus.home.lan> Date: Wed, 7 Apr 2010 23:44:31 -0400 Received: by 10.150.194.2 with SMTP id r2mr2103126ybf.92.1270698271712; Wed, 07 Apr 2010 20:44:31 -0700 (PDT) Message-ID: From: Rich To: Jeremy Chadwick Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS arc sizing (maybe related to kern/145229) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 03:44:32 -0000 vm.kmem_size_scale: 3 vm.kmem_size_max: 329853485875 vm.kmem_size_min: 0 vm.kmem_size: 5033164800 - Rich On Wed, Apr 7, 2010 at 11:36 PM, Jeremy Chadwick wrote: > On Wed, Apr 07, 2010 at 11:25:57PM -0400, Rich wrote: >> kstat.zfs.misc.arcstats.memory_throttle_count: 673016 >> >> Since UFS has no files of any reasonable size on it (it's literally >> just rootFS)... >> >> ... >> >> > Rich wrote: >> >> >> >> vfs.zfs.arc_meta_limit=1073741824 >> >> vfs.zfs.arc_meta_used=548265792 >> >> vfs.zfs.arc_min=536870912 >> >> vfs.zfs.arc_max=3221225472 >> >> >> >> So it very clearly limits to near the minimum size, but whether this >> >> is design or accidental behavior, I'm unsure. > > I don't see a vm.kmem_size entry in there, so I can't see how you're > going to reach arc_max ever. 
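With the numbers Rich posted, arc_max does in fact fit inside the kernel address space; a trivial check (a sketch hard-coding the quoted values — on a live FreeBSD box they would come from `sysctl -n vm.kmem_size` and `sysctl -n vfs.zfs.arc_max`) shows the headroom:

```shell
#!/bin/sh
# Compare vfs.zfs.arc_max against vm.kmem_size using the values
# quoted in this thread; the ARC has to fit inside the kernel map.
kmem_size=5033164800    # vm.kmem_size
arc_max=3221225472      # vfs.zfs.arc_max

if [ "$arc_max" -lt "$kmem_size" ]; then
    echo "arc_max fits, headroom: $((kmem_size - arc_max)) bytes"
else
    echo "arc_max exceeds kmem_size by $((arc_max - kmem_size)) bytes"
fi
# arc_max fits, headroom: 1811939328 bytes (~1.7 GB)
```

So the ~3 GB arc_max sits inside the ~4.7 GB kernel map with room to spare, which is why the throttle counter, not kmem exhaustion, is the suspect here.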
> > -- > | Jeremy Chadwick jdc@parodius.com | > | Parodius Networking http://www.parodius.com/ | > | UNIX Systems Administrator Mountain View, CA, USA | > | Making life hard for others since 1977. PGP: 4BD6C0CB | > > From owner-freebsd-fs@FreeBSD.ORG Thu Apr 8 06:02:28 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9C56C1065675 for ; Thu, 8 Apr 2010 06:02:28 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 709528FC25 for ; Thu, 8 Apr 2010 06:02:28 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o3862Shm098764 for ; Thu, 8 Apr 2010 06:02:28 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o3862SoH098763; Thu, 8 Apr 2010 06:02:28 GMT (envelope-from gnats) Date: Thu, 8 Apr 2010 06:02:28 GMT Message-Id: <201004080602.o3862SoH098763@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: "Andrei V. Lavreniyuk" Cc: Subject: Re: kern/145424: [zfs] [patch] move source closer to v15 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: "Andrei V. Lavreniyuk" List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 06:02:28 -0000 The following reply was made to PR kern/145424; it has been noted by GNATS. From: "Andrei V. Lavreniyuk" To: bug-followup@FreeBSD.org, mm@FreeBSD.org Cc: Subject: Re: kern/145424: [zfs] [patch] move source closer to v15 Date: Thu, 08 Apr 2010 08:23:12 +0300 Hi! 
My test system 1 + all patches from kern/145424 (http://mfsbsd.vx.sk/zfs/stable-zfs-combined.patch): FreeBSD datacenter.technica-03.local 8.0-STABLE FreeBSD 8.0-STABLE #0: Tue Apr 6 20:37:49 EEST 2010 root@datacenter.technica-03.local:/usr/obj/usr/src/sys/SMP64 amd64 # zpool status -v pool: zdata state: ONLINE scrub: scrub completed after 0h59m with 0 errors on Tue Apr 6 22:30:12 2010 config: NAME STATE READ WRITE CKSUM zdata ONLINE 0 0 0 raidz2 ONLINE 0 0 0 gpt/disk0 ONLINE 0 0 0 gpt/disk1 ONLINE 0 0 0 gpt/disk2 ONLINE 0 0 0 gpt/disk3 ONLINE 0 0 0 gpt/disk4 ONLINE 0 0 0 gpt/disk5 ONLINE 0 0 0 logs errors: No known data errors My test system 2 + all patches from kern/145424 (http://mfsbsd.vx.sk/zfs/stable-zfs-combined.patch): FreeBSD opensolaris.technica-03.local 8.0-STABLE FreeBSD 8.0-STABLE #0: Tue Apr 6 17:41:54 UTC 2010 root@opensolaris.technica-03.local:/usr/obj/usr/src/sys/SMP64R amd64 # zpool status -v pool: zsolaris state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM zsolaris ONLINE 0 0 0 raidz2 ONLINE 0 0 0 gpt/disk1 ONLINE 0 0 0 gpt/disk2 ONLINE 0 0 0 gpt/disk3 ONLINE 0 0 0 logs spares gpt/disk0 AVAIL errors: No known data errors ZFS works fine. Thanks! -- Best regards, Andrei V. Lavreniyuk. 
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 8 13:55:37 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 25AC71065673 for ; Thu, 8 Apr 2010 13:55:37 +0000 (UTC) (envelope-from me@lexasoft.ru) Received: from relay.wahome.ru (relay.wahome.ru [95.211.21.141]) by mx1.freebsd.org (Postfix) with ESMTP id E102A8FC16 for ; Thu, 8 Apr 2010 13:55:36 +0000 (UTC) Received: from mmx.lexasoft.ru (mmx.lexasoft.ru [92.241.160.6]) by relay.wahome.ru (Postfix) with ESMTP id 0F9066B21EA for ; Thu, 8 Apr 2010 17:52:39 +0400 (MSD) Received: from [10.100.0.2] (petrovich-telecom-gw.wahome.ru [77.91.225.38]) by mmx.lexasoft.ru (Postfix) with ESMTPSA id 06D0128491; Thu, 8 Apr 2010 17:55:34 +0400 (MSD) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=utf-8 From: Alexey Tarasov In-Reply-To: Date: Thu, 8 Apr 2010 17:55:33 +0400 Content-Transfer-Encoding: quoted-printable Message-Id: <46D5776E-39F0-48A7-B1C0-B844BF5147C5@lexasoft.ru> References: <201003291616.27838.Pascal.Stumpf@cubes.de> <9D752CC7-5CCA-454D-8BEC-F3D5E6F8445C@lexasoft.ru> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1077) Cc: Subject: Re: ZFS raidz and 4k sector disks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 13:55:37 -0000 Hello. I've tried all methods and realized that unfortunately the only working method is gnop. So you can't use these disks for ZFS at all now. On 29.03.2010, at 19:18, Ivan Voras wrote: > Another possible solution is gnop, which I think somebody already > mentioned. It too can create sector sizes of multiple base sector size. 
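The gnop workaround being discussed is usually applied like this (a sketch, assuming a blank disk ada0 — the device name is illustrative; gnop(8) and zpool(8) are FreeBSD commands, so this only runs on such a system, and the nop layer is only needed at pool creation time so ZFS picks the larger alignment shift):

```
# Create a transparent pass-through provider that reports 4096-byte
# sectors on top of the 512-byte-sector disk.
gnop create -S 4096 /dev/ada0

# Create the pool on the .nop device so ZFS records 4K-aligned I/O.
zpool create tank /dev/ada0.nop

# The nop layer can be removed afterwards; the pool keeps the
# alignment it was created with.
zpool export tank
gnop destroy /dev/ada0.nop
zpool import tank
```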
-- Alexey Tarasov (\__/) (='.'=) E[: | | | | :]З (")_(") From owner-freebsd-fs@FreeBSD.ORG Thu Apr 8 20:16:55 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4F8141065670 for ; Thu, 8 Apr 2010 20:16:55 +0000 (UTC) (envelope-from scode@scode.org) Received: from mail-ww0-f54.google.com (mail-ww0-f54.google.com [74.125.82.54]) by mx1.freebsd.org (Postfix) with ESMTP id E4ADA8FC19 for ; Thu, 8 Apr 2010 20:16:53 +0000 (UTC) Received: by wwb24 with SMTP id 24so524148wwb.13 for ; Thu, 08 Apr 2010 13:16:52 -0700 (PDT) MIME-Version: 1.0 Sender: scode@scode.org Received: by 10.216.50.11 with HTTP; Thu, 8 Apr 2010 13:16:52 -0700 (PDT) X-Originating-IP: [213.114.159.69] In-Reply-To: <4BBD15E7.5010006@xpam.de> References: <4BBD15E7.5010006@xpam.de> Date: Thu, 8 Apr 2010 22:16:52 +0200 X-Google-Sender-Auth: b9b61a64a6d7ca6d Received: by 10.216.85.143 with SMTP id u15mr300991wee.205.1270757812635; Thu, 08 Apr 2010 13:16:52 -0700 (PDT) Message-ID: From: Peter Schuller To: Adam Nowacki Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS arc sizing (maybe related to kern/145229) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 08 Apr 2010 20:16:55 -0000 > check kstat.zfs.misc.arcstats.memory_throttle_count > This counter is increased every time zfs thinks system is running low on > memory and will force a write flush and reduce arc size to minimum. Biggest It doesn't always seem to do so. 
I did (only) one test, and here is some output from another little loop while I was dd:ing to a file at maximum speed: Thu Apr 8 22:13:38 CEST 2010: 1229.71 MB, 11 throttles Thu Apr 8 22:13:39 CEST 2010: 1231.92 MB, 11 throttles Thu Apr 8 22:13:40 CEST 2010: 1233.26 MB, 11 throttles Thu Apr 8 22:13:42 CEST 2010: 1235.36 MB, 11 throttles Thu Apr 8 22:13:43 CEST 2010: 1237.58 MB, 11 throttles Thu Apr 8 22:13:44 CEST 2010: 1237.83 MB, 11 throttles Thu Apr 8 22:13:45 CEST 2010: 1237.83 MB, 11 throttles Thu Apr 8 22:13:46 CEST 2010: 1239.94 MB, 11 throttles Thu Apr 8 22:13:47 CEST 2010: 1242.38 MB, 11 throttles Thu Apr 8 22:13:48 CEST 2010: 1264.49 MB, 11 throttles Thu Apr 8 22:13:50 CEST 2010: 1204.95 MB, 11 throttles Thu Apr 8 22:13:51 CEST 2010: 1228.51 MB, 14 throttles Thu Apr 8 22:13:52 CEST 2010: 1228.64 MB, 14 throttles Thu Apr 8 22:13:53 CEST 2010: 1228.81 MB, 14 throttles Thu Apr 8 22:13:54 CEST 2010: 1228.81 MB, 14 throttles Thu Apr 8 22:13:55 CEST 2010: 1228.81 MB, 14 throttles Thu Apr 8 22:13:56 CEST 2010: 1228.81 MB, 14 throttles Thu Apr 8 22:13:58 CEST 2010: 1228.74 MB, 14 throttles Thu Apr 8 22:13:59 CEST 2010: 1228.89 MB, 14 throttles Thu Apr 8 22:14:00 CEST 2010: 1228.91 MB, 14 throttles Thu Apr 8 22:14:03 CEST 2010: 1229.05 MB, 17 throttles Thu Apr 8 22:14:07 CEST 2010: 1228.96 MB, 19 throttles Thu Apr 8 22:14:16 CEST 2010: 1228.93 MB, 22 throttles Thu Apr 8 22:14:18 CEST 2010: 1229.19 MB, 22 throttles Thu Apr 8 22:14:19 CEST 2010: 1229.09 MB, 26 throttles Thu Apr 8 22:14:25 CEST 2010: 1229.04 MB, 26 throttles Thu Apr 8 22:14:30 CEST 2010: 1229.08 MB, 29 throttles Thu Apr 8 22:14:32 CEST 2010: 1230.13 MB, 29 throttles Thu Apr 8 22:14:36 CEST 2010: 1229.11 MB, 32 throttles Thu Apr 8 22:14:40 CEST 2010: 1229.34 MB, 32 throttles Thu Apr 8 22:14:44 CEST 2010: 1229.25 MB, 35 throttles Thu Apr 8 22:14:46 CEST 2010: 1229.28 MB, 35 throttles Thu Apr 8 22:14:50 CEST 2010: 1236.55 MB, 38 throttles Thu Apr 8 22:14:52 CEST 2010: 1238.9 MB, 38 throttles Thu Apr 8 
22:14:53 CEST 2010: 1241 MB, 38 throttles -- / Peter Schuller From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 05:50:03 2010 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 887111065672 for ; Fri, 9 Apr 2010 05:50:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 5DB1F8FC15 for ; Fri, 9 Apr 2010 05:50:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o395o3Cx053299 for ; Fri, 9 Apr 2010 05:50:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o395o3YZ053297; Fri, 9 Apr 2010 05:50:03 GMT (envelope-from gnats) Date: Fri, 9 Apr 2010 05:50:03 GMT Message-Id: <201004090550.o395o3YZ053297@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: Martin Matuska Cc: Subject: Re: kern/145424: [zfs] [patch] move source closer to v15 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Martin Matuska List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 05:50:03 -0000 The following reply was made to PR kern/145424; it has been noted by GNATS. 
From: Martin Matuska To: bug-followup@FreeBSD.org, mm@FreeBSD.org Cc: Subject: Re: kern/145424: [zfs] [patch] move source closer to v15 Date: Fri, 09 Apr 2010 07:34:38 +0200 The main patch (head-zfs-v1.patch, stable-zfs-v1.patch) has 2 serious bugs: a) after 7837 "zfs send" causes a kernel panic b) after 8241 "zpool export" and "zpool destroy" always ends with pool being busy From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 08:28:38 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 698971065673 for ; Fri, 9 Apr 2010 08:28:38 +0000 (UTC) (envelope-from avg@icyb.net.ua) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id AC57F8FC29 for ; Fri, 9 Apr 2010 08:28:37 +0000 (UTC) Received: from porto.topspin.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id LAA13978; Fri, 09 Apr 2010 11:28:35 +0300 (EEST) (envelope-from avg@icyb.net.ua) Received: from localhost.topspin.kiev.ua ([127.0.0.1]) by porto.topspin.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1O09Zr-0004gQ-H5; Fri, 09 Apr 2010 11:28:35 +0300 Message-ID: <4BBEE533.2050507@icyb.net.ua> Date: Fri, 09 Apr 2010 11:28:35 +0300 From: Andriy Gapon User-Agent: Thunderbird 2.0.0.24 (X11/20100321) MIME-Version: 1.0 To: Alexey Tarasov References: <201003291616.27838.Pascal.Stumpf@cubes.de> <9D752CC7-5CCA-454D-8BEC-F3D5E6F8445C@lexasoft.ru> <46D5776E-39F0-48A7-B1C0-B844BF5147C5@lexasoft.ru> In-Reply-To: <46D5776E-39F0-48A7-B1C0-B844BF5147C5@lexasoft.ru> X-Enigmail-Version: 0.96.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: ZFS raidz and 4k sector disks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
X-List-Received-Date: Fri, 09 Apr 2010 08:28:38 -0000 on 08/04/2010 16:55 Alexey Tarasov said the following: > Hello. > > I've tried all methods and realized that unfortunately the only working > method is gnop. So you can't use these disks for ZFS at all now. Why? And what are you actually trying to do? My understanding was that even with 512-byte sectors ZFS still aligns its on-disk data with > 4K alignment. Do you see otherwise? What problem do you have? -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 08:39:36 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 271A8106576B for ; Fri, 9 Apr 2010 08:39:35 +0000 (UTC) (envelope-from me@janh.de) Received: from mailhost.uni-hamburg.de (mailhost.uni-hamburg.de [134.100.32.155]) by mx1.freebsd.org (Postfix) with ESMTP id 2BC568FC20 for ; Fri, 9 Apr 2010 08:39:34 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by mailhost.uni-hamburg.de (Postfix) with ESMTP id B830A90377; Fri, 9 Apr 2010 10:23:07 +0200 (CEST) X-Virus-Scanned: by University of Hamburg (RRZ/mailhost) Received: from mailhost.uni-hamburg.de ([127.0.0.1]) by localhost (mailhost.uni-hamburg.de [127.0.0.1]) (amavisd-new, port 10024) with LMTP id D6HpGib0Vbce; Fri, 9 Apr 2010 10:23:07 +0200 (CEST) Received: from [192.168.178.31] (f054005083.adsl.alicedsl.de [78.54.5.83]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) (Authenticated sender: fmjv004) by mailhost.uni-hamburg.de (Postfix) with ESMTPSA id 7A42C90223; Fri, 9 Apr 2010 10:23:07 +0200 (CEST) Message-ID: <4BBEE3E7.8040201@janh.de> Date: Fri, 09 Apr 2010 10:23:03 +0200 From: Jan Henrik Sylvester User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.1.9) Gecko/20100331 Thunderbird/3.0.4 MIME-Version: 1.0 To: fs-list freebsd Content-Type: text/plain; charset=ISO-8859-1; format=flowed 
Content-Transfer-Encoding: 7bit Subject: AFS on FreeBSD 8? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 08:39:36 -0000 [Repost from freebsd-questions as I was told to ask here.] I tried to get an AFS client on my 8.0-RELEASE (or 8-STABLE) system. What is the status of AFS on FreeBSD? Neither OpenAFS nor Arla seem to be in ports. I found the freebsd-afs mailing list with many posting from 2008/Dec but nothing from 2009 or 2010. The port-freebsd list on openafs.org has nothing newer, either. http://wiki.freebsd.org/afs has instructions for Arla, but the build fails on 8.0-RELEASE. http://wiki.freebsd.org/afs-server seems to be even older. http://wiki.freebsd.org/AFS_using_OpenAFS_%2B_Arla gives me: "You are not allowed to view this page." Is there anything more current that I missed? Thanks, Jan Henrik From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 11:15:19 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E043D106564A for ; Fri, 9 Apr 2010 11:15:19 +0000 (UTC) (envelope-from me@lexasoft.ru) Received: from relay.wahome.ru (relay.wahome.ru [95.211.21.141]) by mx1.freebsd.org (Postfix) with ESMTP id A57AD8FC12 for ; Fri, 9 Apr 2010 11:15:19 +0000 (UTC) Received: from mmx.lexasoft.ru (mmx.lexasoft.ru [92.241.160.6]) by relay.wahome.ru (Postfix) with ESMTP id B94896B21CC for ; Fri, 9 Apr 2010 15:12:21 +0400 (MSD) Received: from [10.100.0.2] (petrovich-telecom-gw.wahome.ru [77.91.225.38]) by mmx.lexasoft.ru (Postfix) with ESMTPSA id 65DD928491 for ; Fri, 9 Apr 2010 15:15:18 +0400 (MSD) Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Apple Message framework v1077) From: Alexey Tarasov In-Reply-To: <4BBEE533.2050507@icyb.net.ua> Date: Fri, 9 Apr 2010 15:15:17 +0400 
Content-Transfer-Encoding: quoted-printable Message-Id: <96786FB1-DD39-4E22-B942-B87C15E164B0@lexasoft.ru> References: <201003291616.27838.Pascal.Stumpf@cubes.de> <9D752CC7-5CCA-454D-8BEC-F3D5E6F8445C@lexasoft.ru> <46D5776E-39F0-48A7-B1C0-B844BF5147C5@lexasoft.ru> <4BBEE533.2050507@icyb.net.ua> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1077) Subject: Re: ZFS raidz and 4k sector disks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 11:15:20 -0000 Hello. I see considerably increased performance when creating over a gnop -S 4096 virtual disk. Even when I create a zpool over raw disks the performance is very bad and concurrent writes stall. When using gnop, ZFS works VERY fast! Btw, here is another discussion; maybe there is a bug in a mav@ commit, because he has added support for >512 sector size: http://lists.freebsd.org/pipermail/freebsd-current/2010-April/016495.html Hello, Ukraine! =) On 09.04.2010, at 12:28, Andriy Gapon wrote: > on 08/04/2010 16:55 Alexey Tarasov said the following: >> Hello. >> >> I've tried all methods and realized that unfortunately the only working >> method is gnop. So you can't use these disks for ZFS at all now. > > Why? And what are you actually trying to do? > My understanding was that even with 512-byte sectors ZFS still aligns its > on-disk data with > 4K alignment. > Do you see otherwise? What problem do you have? 
> > -- > Andriy Gapon -- Alexey Tarasov (\__/) (='.'=) E[: | | | | :]З (")_(") From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 11:34:56 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1063D106566B for ; Fri, 9 Apr 2010 11:34:56 +0000 (UTC) (envelope-from bu7cher@yandex.ru) Received: from forward2.mail.yandex.net (forward2.mail.yandex.net [77.88.46.7]) by mx1.freebsd.org (Postfix) with ESMTP id B09858FC1B for ; Fri, 9 Apr 2010 11:34:55 +0000 (UTC) Received: from smtp4.mail.yandex.net (smtp4.mail.yandex.net [77.88.46.104]) by forward2.mail.yandex.net (Yandex) with ESMTP id 5F32738A9B49; Fri, 9 Apr 2010 15:24:12 +0400 (MSD) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1270812252; bh=fBkVz9oZDxaYIl9v2wbfZfKZZtMugzcB2r966XNVoYs=; h=Message-ID:Date:From:MIME-Version:To:CC:Subject:References: In-Reply-To:Content-Type:Content-Transfer-Encoding; b=GcKNXkvqsxPT5xpaoxNKGF5eLO2e033r2PD7lWyp24blfgWZxjfQgm0Td4HJzGlLA 97LAqc4yLPVwhKvBPIeX93ehMBl0VChMD8+97V65KCD6xxs+A1ztHfeHuW+WLIw1XX 7prrND2Vm5I4eeCsuWX6MfgFUpijDKKuofsXuw5s= Received: from [127.0.0.1] (mail.kirov.so-cdu.ru [77.72.136.145]) by smtp4.mail.yandex.net (Yandex) with ESMTPSA id 2B3A0128079; Fri, 9 Apr 2010 15:24:12 +0400 (MSD) Message-ID: <4BBF0E5B.4030908@yandex.ru> Date: Fri, 09 Apr 2010 15:24:11 +0400 From: "Andrey V. 
Elsukov" User-Agent: Mozilla Thunderbird 1.5 (FreeBSD/20051231) MIME-Version: 1.0 To: Alexey Tarasov References: <201003291616.27838.Pascal.Stumpf@cubes.de> <9D752CC7-5CCA-454D-8BEC-F3D5E6F8445C@lexasoft.ru> <46D5776E-39F0-48A7-B1C0-B844BF5147C5@lexasoft.ru> <4BBEE533.2050507@icyb.net.ua> <96786FB1-DD39-4E22-B942-B87C15E164B0@lexasoft.ru> In-Reply-To: <96786FB1-DD39-4E22-B942-B87C15E164B0@lexasoft.ru> Content-Type: text/plain; charset=KOI8-R; format=flowed Content-Transfer-Encoding: 7bit X-Yandex-TimeMark: 1270812252 X-Yandex-Spam: 1 X-Yandex-Front: smtp4.mail.yandex.net Cc: freebsd-fs@freebsd.org Subject: Re: ZFS raidz and 4k sector disks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 11:34:56 -0000 On 09.04.2010 15:15, Alexey Tarasov wrote: > Btw, here is another discussion, maybe there is a bug in a mav@ commit, because he has added > support for >512 sector size: First of all, can you look at the commit log and understand what it changed? -- WBR, Andrey V. 
Elsukov From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 11:36:20 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 24A7B106566B for ; Fri, 9 Apr 2010 11:36:20 +0000 (UTC) (envelope-from me@lexasoft.ru) Received: from relay.wahome.ru (relay.wahome.ru [95.211.21.141]) by mx1.freebsd.org (Postfix) with ESMTP id DCDC08FC0C for ; Fri, 9 Apr 2010 11:36:19 +0000 (UTC) Received: from mmx.lexasoft.ru (mmx.lexasoft.ru [92.241.160.6]) by relay.wahome.ru (Postfix) with ESMTP id 09D696B222F; Fri, 9 Apr 2010 15:33:22 +0400 (MSD) Received: from [10.100.0.2] (petrovich-telecom-gw.wahome.ru [77.91.225.38]) by mmx.lexasoft.ru (Postfix) with ESMTPSA id B99AF2848C; Fri, 9 Apr 2010 15:36:18 +0400 (MSD) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=koi8-r From: Alexey Tarasov In-Reply-To: <4BBF0E5B.4030908@yandex.ru> Date: Fri, 9 Apr 2010 15:36:17 +0400 Content-Transfer-Encoding: quoted-printable Message-Id: References: <201003291616.27838.Pascal.Stumpf@cubes.de> <9D752CC7-5CCA-454D-8BEC-F3D5E6F8445C@lexasoft.ru> <46D5776E-39F0-48A7-B1C0-B844BF5147C5@lexasoft.ru> <4BBEE533.2050507@icyb.net.ua> <96786FB1-DD39-4E22-B942-B87C15E164B0@lexasoft.ru> <4BBF0E5B.4030908@yandex.ru> To: "Andrey V. Elsukov" , freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1077) Cc: Subject: Re: ZFS raidz and 4k sector disks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 11:36:20 -0000 On 09.04.2010, at 15:24, Andrey V. Elsukov wrote: > On 09.04.2010 15:15, Alexey Tarasov wrote: >> Btw, here is another discussion, maybe there is a bug in a mav@ commit, because he has added >> support for >512 sector size: > > First of all, can you look at the commit log and understand what it changed? 
http://svn.freebsd.org/viewvc/base?view=3Drevision&revision=3D198897 - Add support for sector size > 512 bytes and physical sector of several logical sectors, introduced by ATA-7 specification. May be I have misunderstood this log message? -- Alexey Tarasov (\__/)=20 (=3D'.'=3D)=20 E[: | | | | :]=FA=20 (")_(") From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 12:11:03 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9AC69106564A for ; Fri, 9 Apr 2010 12:11:03 +0000 (UTC) (envelope-from numisemis@yahoo.com) Received: from web112403.mail.gq1.yahoo.com (web112403.mail.gq1.yahoo.com [98.137.26.129]) by mx1.freebsd.org (Postfix) with SMTP id 5181F8FC1C for ; Fri, 9 Apr 2010 12:11:03 +0000 (UTC) Received: (qmail 82373 invoked by uid 60001); 9 Apr 2010 12:11:02 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1270815062; bh=Q0yAq1n7l6UM2lQIBQ8HqTBwXAd5kK2VWmK8ucTz/LU=; h=Message-ID:X-YMail-OSG:Received:X-Mailer:Date:From:Subject:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding; b=dqP8nTBr1+Opa8jIimXGUZ7nDBgur85SmGewqjlsrn9fcqkkr3OD8ARyJsu+C/ojsZ2rILOMKQAk6SSLX7HsAXeEo+PN31X29NxEyrTUIZ+U4p3BvronLkhaZqhCUWmkRUHGzBOKBB4YWCqFIcnMzKnjuof2gqQ76exY1rYs3zQ= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=Message-ID:X-YMail-OSG:Received:X-Mailer:Date:From:Subject:To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding; b=X4DrhrLt1kh4TOy0iTGOs3rqJmJgk1MqiP2l06QHy84+ZwsLzbhy43STgReKucZv9GXhLacxkGl/dgvAa5ZLMBkeiyfCAWNsQGE4pa5j6/Eqx1wcWQ8/w4D8kET/+wrmhwXIAHA28E7ZwLI8+ppHPF32ht3GW97+VSWIb1eH8I8=; Message-ID: <839086.81099.qm@web112403.mail.gq1.yahoo.com> X-YMail-OSG: EIOP7PYVM1kzoIhowdEnx1uKJ4Bf5gi9tGXVYzxvxWurxUS yVRLKOxpxK8wirRn9MJ4S1O9gfthi4SiWgkqKmOmla5Sd2jLBqlg8hh7gpMp HoFqtGmz48zAx7hdPfhF9zJm3VH37.r0YPSMZGlnhIrHDrabvJp11sbwMTax 9JmE4qzGBgXayjTCCT.vskUYWxJijFktPwxHULAHHC7QEoTigdh0aXiUzKnE 
o86O8nPErO6Nb4vJYSsq74J3xypgOErrToWSUwW2Jdl7yqRwwqBcIgMPxdfB JBipaJHvLdOoVWP0OpeNqosBLWG6LorO2Plozr2jkjyNwfL24K2nq0C3ctjb cqTgzFxqOdvg8HOY8z4Vae7Y44Lqi8w-- Received: from [213.147.110.159] by web112403.mail.gq1.yahoo.com via HTTP; Fri, 09 Apr 2010 05:11:02 PDT X-Mailer: YahooMailWebService/0.8.100.260964 Date: Fri, 9 Apr 2010 05:11:02 -0700 (PDT) From: =?utf-8?B?xaBpbXVuIE1pa2VjaW4=?= To: Alexey Tarasov MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs Subject: Re: ZFS raidz and 4k sector disks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 12:11:03 -0000 =0Ahttp://svn.freebsd.org/viewvc/base?view=3Drevision&revision=3D198897=0A= =0A- Add support for sector size > 512 bytes and physical sector of several= =0Alogical sectors, introduced by ATA-7 specification.=0A=0AMay be I have m= isunderstood this log message?=0A=0A=0AThis commit is for HEAD (are you usi= ng HEAD?) 
and I suppose it doesn't work if you are using ata (disk name is = adX) driver instead of ahci (disk name is adaX).=0A=0A=0A=0A=0A From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 12:25:09 2010 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EA90C1065675 for ; Fri, 9 Apr 2010 12:25:09 +0000 (UTC) (envelope-from me@lexasoft.ru) Received: from relay.wahome.ru (relay.wahome.ru [95.211.21.141]) by mx1.freebsd.org (Postfix) with ESMTP id B15788FC1A for ; Fri, 9 Apr 2010 12:25:09 +0000 (UTC) Received: from mmx.lexasoft.ru (mmx.lexasoft.ru [92.241.160.6]) by relay.wahome.ru (Postfix) with ESMTP id 665F26B2207; Fri, 9 Apr 2010 16:22:10 +0400 (MSD) Received: from [10.100.0.2] (petrovich-telecom-gw.wahome.ru [77.91.225.38]) by mmx.lexasoft.ru (Postfix) with ESMTPSA id C90392848C; Fri, 9 Apr 2010 16:25:06 +0400 (MSD) Mime-Version: 1.0 (Apple Message framework v1077) Content-Type: text/plain; charset=utf-8 From: Alexey Tarasov In-Reply-To: <839086.81099.qm@web112403.mail.gq1.yahoo.com> Date: Fri, 9 Apr 2010 16:25:05 +0400 Content-Transfer-Encoding: quoted-printable Message-Id: <55143944-1C7B-4151-8DDF-EBE51A8979F7@lexasoft.ru> References: <839086.81099.qm@web112403.mail.gq1.yahoo.com> To: =?windows-1252?Q?=8Aimun_Mikecin?= , freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1077) Cc: Subject: Re: ZFS raidz and 4k sector disks X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 09 Apr 2010 12:25:10 -0000 > This commit is for HEAD (are you using HEAD?) and I suppose it doesn't = work if you are using ata (disk name is adX) driver instead of ahci = (disk name is adaX). It was MFC'ed to 8-STABLE. Will try ahci with fresh STABLE later. 
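The driver check and the 4k-alignment workaround being discussed in this thread can be sketched as shell commands. This is an illustrative sketch only — the device names (ada0 etc.) and pool layout are assumptions, not details taken from the thread:

```shell
# Which driver claimed the disk? ahci(4) attaches disks as adaX,
# the legacy ata(4) driver as adX. Inspect what the disk reports:
diskinfo -v /dev/ada0
# "sectorsize" is the logical sector size; a "stripesize" of 4096
# indicates a 4k physical sector behind 512-byte emulation.

# Workaround in use at the time: wrap each disk in a gnop(8) provider
# that advertises 4096-byte sectors, so zpool create picks ashift=12.
gnop create -S 4096 /dev/ada0 /dev/ada1 /dev/ada2
zpool create tank raidz ada0.nop ada1.nop ada2.nop

# The .nop shims are not persistent across reboots, but the pool keeps
# the ashift it was created with; verify it:
zdb tank | grep ashift
```

The gnop shim only matters at pool-creation time, since ashift is fixed per vdev once the pool exists.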
--
Alexey Tarasov

From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 12:36:48 2010
From: aditya sarawgi
Date: Fri, 9 Apr 2010 13:36:44 +0100
To: Jan Henrik Sylvester, freebsd-fs@freebsd.org
Subject: Re: AFS on FreeBSD 8?

Hi,

The openafs-client is not stable on FreeBSD 8 or 9, but I think it is usable on FreeBSD 7.X.

On 4/9/10, Jan Henrik Sylvester wrote:
> [Repost from freebsd-questions as I was told to ask here.]
>
> I tried to get an AFS client on my 8.0-RELEASE (or 8-STABLE) system.
>
> What is the status of AFS on FreeBSD?
>
> Neither OpenAFS nor Arla seem to be in ports.
>
> I found the freebsd-afs mailing list with many postings from 2008/Dec but
> nothing from 2009 or 2010. The port-freebsd list on openafs.org has
> nothing newer, either.
>
> http://wiki.freebsd.org/afs has instructions for Arla, but the build
> fails on 8.0-RELEASE.
>
> http://wiki.freebsd.org/afs-server seems to be even older.
>
> http://wiki.freebsd.org/AFS_using_OpenAFS_%2B_Arla gives me: "You are
> not allowed to view this page."
>
> Is there anything more current that I missed?
>
> Thanks,
> Jan Henrik

--
Cheers,
Aditya Sarawgi

From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 14:26:03 2010
From: Andriy Gapon
Date: Fri, 09 Apr 2010 17:25:59 +0300
To: freebsd-fs@freebsd.org, freebsd-geom@freebsd.org
Subject: Re: g_vfs_open and bread(devvp, ...)

I came up with a small demonstration of why I bothered with this issue at all.
To reproduce the experiment you need this image:
http://people.freebsd.org/~avg/test_img.gz
and avgfs:)
http://people.freebsd.org/~avg/avgfs/

The image needs to be gunzip-ed, of course.
avgfs:) needs to be compiled and avgfs.ko loaded.

The demonstration is to be executed on an unpatched system, e.g. head before r205860, or a branch other than head, or recent head with r206097 and r205860 reverted. Then do the following.

I. Inspect the test image.
$ hd test_img.img | more

II. Test bread() through an avgfs vnode.
1. Present the image as a disk with a 2K sector size:
mdconfig -a -t vnode -f test_img.img -S 2048 -u 0
2. Mount the image using avgfs:)
mount -t avg /dev/md0 /mnt
3. Read some data blocks in one go and examine the result:
dd if=/mnt/thefile bs=10k count=1 | hd
4. Re-read the same data using 2K blocks and examine the result:
dd if=/mnt/thefile bs=2k count=5 | hd
5. Cleanup:
umount /mnt

III. Test bread() through devvp.
1. Mount the image using avgfs:) with the devvp option:
mount -t avg -o devvp /dev/md0 /mnt
2. Read some data blocks in one go and examine the result:
dd if=/mnt/thefile bs=10k count=1 | hd
3. Re-read the same data using 2K blocks and examine the result:
dd if=/mnt/thefile bs=2k count=5 | hd
4. Cleanup:
umount /mnt
mdconfig -d -u 0
kldunload avgfs

SPOILER. In my testing only III.3 produces an unexpected result:

00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000800 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 |................|
*
00001000 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 |................|
*
00001800 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 |................|
*
00002000 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 |................|
*

The result is explained here:
http://lists.freebsd.org/pipermail/freebsd-fs/2008-February/004268.html

Repeat the experiment on a patched system. See if the change was worth bothering.

P.S. avgfs:) is explained here:
http://permalink.gmane.org/gmane.os.freebsd.devel.file-systems/8886

--
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Fri Apr 9 19:38:28 2010
From: Peter Schuller
Date: Fri, 9 Apr 2010 21:38:26 +0200
To: freebsd-fs@freebsd.org
Subject: Re: ZFS arc sizing (maybe related to kern/145229)

For the record, I found this:
http://svn.freebsd.org/viewvc/base/head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c?r1=197816&r2=197815&pathrev=197816

Presumably this should prevent the ARC from going below the asked-for minimum, though it doesn't fix the fundamental balancing issue between the ZFS ARC and the rest of the system.

--
/ Peter Schuller
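The ARC floor and ceiling discussed above are tunable from loader.conf on FreeBSD. A sketch of the knobs involved — the sizes shown are illustrative assumptions, not recommendations from this thread:

```shell
# /boot/loader.conf -- ZFS ARC sizing tunables (FreeBSD 8.x era)
vfs.zfs.arc_min="512M"    # floor the ARC should not shrink below
vfs.zfs.arc_max="4G"      # cap on ARC growth

# After a reboot, observe the effective limits and the current ARC size:
sysctl vfs.zfs.arc_min vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size   # bytes currently held by the ARC
```

Even with arc_min set, the balancing problem Peter describes remains: the tunables bound the ARC's size but do not arbitrate memory pressure between the ARC and the rest of the VM system.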