From owner-freebsd-current@FreeBSD.ORG Fri May 30 06:10:05 2014
Date: Thu, 29 May 2014 23:10:03 -0700 (PDT)
From: Beeblebrox
To: freebsd-current@freebsd.org
Message-ID: <1401430203270-5916406.post@n5.nabble.com>
In-Reply-To: <53875E91.1080002@freebsd.org>
References: <1401356463384-5916161.post@n5.nabble.com> <201405290908.10274.jhb@freebsd.org> <20140529095722.1765ce36@kan> <201405291204.02160.jhb@freebsd.org> <53875E91.1080002@freebsd.org>
Subject: Re: Memory blackhole in 11. Possibly libc.so.7?
List-Id: Discussions about the use of FreeBSD-current

I'm replying late, but last night I almost had a complete meltdown of the
system. I did a partial "pkg upgrade" for the packages that poudriere had
managed to build, and then all hell broke loose: screen lock-ups and random
reboots. Several hard reboots later, I decided to do a fresh
buildworld/buildkernel, and we seem to be back at some level of normality.
It makes no sense, I know, and I would not be able to tell you what it was
that went wrong. What I can tell you is:

* While compiling, code would compile for a while, then freeze, then
continue where it left off. "# top -P" showed that during these freeze-ups,
of the 4 cores on my system, 3 were mostly idle (80% to 95%) while one core
would be at or near 0% idle. It was not always the same core, and when one
core got freed up, another core could drop to 0%-8% idle. I thought maybe
radeon was using the CPU instead of the GPU, but that was debunked when the
same behavior was observed while compiling my regular kernel without
radeon/xorg loaded.

* Initially a debug kernel was built with the config below, but the screen
goes black when radeon/xorg is loaded:

    include     GENERIC
    ident       KERNDEBUG
    nooptions   INET6
    options     KDB
    options     DDB
    options     GDB
    options     INVARIANTS
    options     INVARIANT_SUPPORT
    options     WITNESS
    options     DEBUG_LOCKS
    options     DEBUG_VFS_LOCKS
    options     DIAGNOSTIC

* I have modified my ZFS-related entries in loader.conf; the latest is:

    vfs.zfs.trim.enabled=1         # Enable TRIM
    vfs.zfs.prefetch_disable=0     # I have 4G of RAM
    #vfs.zfs.arc_min="512M"        # RAM 4GB => 512, RAM 8GB => 1024
    #vfs.zfs.arc_max="1536M"       # TotalRam x 0.5 - 512 MB
    #vm.kmem_size="6G"             # RAM x 1.5
    #vfs.zfs.vdev_max_pending="1"

Currently kstat.zfs.misc.arcstats.size is 2012423600, and the number seems
to hang around in that vicinity.

A separate ZFS question, then: I have 2 SATA-III drives (a 64G SSD plus a
7200rpm spindle) and 3 zpools. The system is my personal desktop running
SQL, an HTTP server, etc., but with no commercial load.

    tank-b: root, usr, var on the SSD
    tank-d: home, all data, SQL DB, NFS-exported PXE root
    tank-a: all code compiles (poudriere, world); located nearest the
            center of the spindle HDD, plus a 3GB ZIL from the SSD drive

Q1: Is it OK to assume that arcstats.size will not change much regardless
of the number of zpools?
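As an aside, the commented-out arc_max value above follows the "TotalRam x 0.5 - 512 MB" rule of thumb quoted in the comment. A minimal sketch of that arithmetic, with the 4 GB RAM figure from this machine hardcoded (on a live FreeBSD box you would derive it from "sysctl -n hw.physmem" instead):

```shell
#!/bin/sh
# Sketch: derive a vfs.zfs.arc_max candidate from the rule of thumb
# quoted in the loader.conf comment above (TotalRam x 0.5 - 512 MB).
# ram_mb is an assumption here (4 GB, as in this post), not probed live.
ram_mb=4096
arc_max_mb=$(( ram_mb / 2 - 512 ))
echo "vfs.zfs.arc_max=\"${arc_max_mb}M\""
```

With ram_mb=4096 this prints vfs.zfs.arc_max="1536M", which matches the commented value above; the observed arcstats.size of ~2012423600 bytes (~1919 MB) is somewhat above that, consistent with arc_max being left unset (commented out).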
Q2: I have 3GB of free space on the SSD reserved for an L2ARC, but decided
it was not necessary after reading that it would mostly be useful for, say,
a commercial web server. Was my assessment incorrect, and would the system
benefit from a 3GB (or larger?) L2ARC on the SSD? If so, on which pool?
(I'm not sure tank-b makes sense, since it's already fully on the same SSD.)

Regards.

-----
FreeBSD-11-current_amd64_root-on-zfs_RadeonKMS
--
View this message in context: http://freebsd.1045724.n5.nabble.com/Memory-blackhole-in-11-Possibly-libc-so-7-tp5916161p5916406.html
Sent from the freebsd-current mailing list archive at Nabble.com.