From owner-freebsd-fs@FreeBSD.ORG Sun Dec 16 12:13:20 2012
Message-ID: <50CDBAD9.6010406@fsn.hu>
Date: Sun, 16 Dec 2012 13:13:13 +0100
From: Attila Nagy <bra@fsn.hu>
To: Damien Fleuriot
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs panic, solaris_assert
References: <50CC86D4.2070502@fsn.hu> <50CC8AE5.5070804@fsn.hu>

On 12/15/2012 10:05 PM, Damien Fleuriot wrote:
> On 15 December 2012 15:36, Attila Nagy wrote:
>> On 12/15/2012 03:19 PM, Attila Nagy wrote:
>>> Hi,
>>>
>>> Since running svn revision r243704 I get frequent panics:
>>> panic: solaris assert: sa.sa_magic == 0x2F505A (0x4f22a8ed == 0x2f505a),
>>> file: /data/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c,
>>> line: 597
>>> cpuid = 0
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
>>> kdb_backtrace() at kdb_backtrace+0x37
>>> panic() at panic+0x1ce
>>> assfail3() at assfail3+0x29
>>> zfs_space_delta_cb() at zfs_space_delta_cb+0xbe
>>> dmu_objset_userquota_get_ids() at dmu_objset_userquota_get_ids+0x142
>>> dnode_sync() at dnode_sync+0xc5
>>> dmu_objset_sync_dnodes() at dmu_objset_sync_dnodes+0x5d
>>> dmu_objset_sync() at dmu_objset_sync+0x17f
>>> dsl_pool_sync() at dsl_pool_sync+0xca
>>> spa_sync() at spa_sync+0x34a
>>> txg_sync_thread() at txg_sync_thread+0x139
>>> fork_exit() at fork_exit+0x11f
>>> fork_trampoline() at fork_trampoline+0xe
>>> --- trap 0, rip = 0, rsp = 0xffffff90231accf0, rbp = 0 ---
>>>
>>> I can't tell whether it's the data or the code. If the latter, is this
>>> fixed in later revisions?
>>> If it's the file system, what can I do about it?
>>>
>> It seems this was introduced with the following mega-MFC:
>> r243674 | mm | 2012-11-29 15:05:04 +0100 (Thu, 29 Nov 2012) | 223 lines
>>
> For what it's worth, running on amd64 with 4 GB of RAM:
> 10-CURRENT r244183: Thu Dec 13 15:35:28 UTC 2012
>
> And I'm not experiencing any ZFS/Solaris problems.
>
> Loader tunables:
> vm.kmem_size="3072M"
> vfs.zfs.arc_min="128M"
> vfs.zfs.arc_max="2048M"

I think it's related to the load and/or the on-disk data.
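
For reference, the assertion that fires is the system attribute (SA) magic
check in zfs_space_delta_cb(), which dmu_objset_userquota_get_ids() calls at
txg sync time to read each file's uid/gid out of the dnode bonus buffer for
space accounting. Here 0x4f22a8ed was found where the SA header magic
0x2F505A should be, so either the on-disk bonus buffer is damaged or the code
is misinterpreting it. A rough standalone illustration of the failing check
follows; the struct layout is made up for the demo, and only SA_MAGIC and the
bad value come from the panic message:

/*
 * Illustration only: mimics the magic-number check that panics in
 * zfs_space_delta_cb().  Not the kernel code itself.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SA_MAGIC 0x2F505AU	/* expected SA header magic, per the panic */

/* Hypothetical stand-in for the SA header at the start of a bonus buffer. */
struct sa_hdr {
	uint32_t sa_magic;	/* must equal SA_MAGIC to be trusted */
	uint16_t sa_layout_info;
};

static void
check_bonus(const struct sa_hdr *hdr)
{
	if (hdr->sa_magic != SA_MAGIC) {
		/* The condition the "solaris assert" trips on. */
		fprintf(stderr,
		    "panic: solaris assert: sa.sa_magic == 0x2F505A "
		    "(0x%08x == 0x2f505a)\n", (unsigned)hdr->sa_magic);
		abort();
	}
}

int
main(void)
{
	struct sa_hdr good = { SA_MAGIC, 0 };
	struct sa_hdr bad  = { 0x4f22a8ed, 0 };	/* value from the report */

	check_bonus(&good);	/* passes silently */
	check_bonus(&bad);	/* reproduces the assertion message */
	return (0);
}

Compiling and running this (e.g. cc -o samagic samagic.c && ./samagic) prints
the same assert message as the panic when it hits the bad header.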