From: "Rodney W. Grimes" <freebsd-rwg@pdx.rh.CN85.dnsmgr.net>
Date: Mon, 20 Aug 2018 16:55:33 -0700 (PDT)
To: blubee blubeeme
Cc: marklmi@yahoo.com, FreeBSD current
Subject: Re: building LLVM threads gets killed
Message-Id: <201808202355.w7KNtXYR077066@pdx.rh.CN85.dnsmgr.net>

> On Mon, Aug 20, 2018 at 11:04 PM Mark Millard wrote:
>
> > Rodney W. Grimes freebsd-rwg at pdx.rh.CN85.dnsmgr.net wrote on
> > Mon Aug 20 14:26:55 UTC 2018 :
> >
> > > > On 20 Aug 2018, at 05:01, blubee blubeeme wrote:
> > > > >
> > > > > I am running current, compiling LLVM60, and when it comes to
> > > > > linking, basically all the processes on my computer get killed:
> > > > > Chrome, Firefox, and some of the LLVM threads as well.
> > > > ...
> > > > > llvm/build % ninja -j8
> > > > > [2408/2473] Building CXX object
> > > > > lib/Passes/CMakeFiles/LLVMPasses.dir/PassBuilder.cpp.o
> > > > > FAILED: lib/Passes/CMakeFiles/LLVMPasses.dir/PassBuilder.cpp.o
> > > > > /usr/bin/c++ -DGTEST_HAS_RTTI=0 -D_DEBUG -D__STDC_CONSTANT_MACROS
> > > > > -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -Ilib/Passes -I../lib/Passes
> > > > > -Iinclude -I../include -isystem /usr/local/include -fPIC
> > > > > -fvisibility-inlines-hidden -Werror=date-time
> > > > > -Werror=unguarded-availability-new -std=c++11 -Wall -Wextra
> > > > > -Wno-unused-parameter -Wwrite-strings -Wcast-qual
> > > > > -Wmissing-field-initializers -pedantic -Wno-long-long
> > > > > -Wcovered-switch-default -Wnon-virtual-dtor -Wdelete-non-virtual-dtor
> > > > > -Wstring-conversion -fdiagnostics-color -g -fno-exceptions -fno-rtti -MD
> > > > > -MT lib/Passes/CMakeFiles/LLVMPasses.dir/PassBuilder.cpp.o -MF
> > > > > lib/Passes/CMakeFiles/LLVMPasses.dir/PassBuilder.cpp.o.d -o
> > > > > lib/Passes/CMakeFiles/LLVMPasses.dir/PassBuilder.cpp.o -c
> > > > > ../lib/Passes/PassBuilder.cpp
> > > > > c++: error: unable to execute command: Killed
> > > >
> > > > It is running out of RAM while running multiple parallel link jobs.
> > > > If you are building with WITH_DEBUG, turn that off; it consumes
> > > > large amounts of memory. If you must have debug info, try adding
> > > > the following flag to the CMake command line:
> > > >
> > > > -D LLVM_PARALLEL_LINK_JOBS:STRING="1"
> > > >
> > > > That will limit the number of parallel link jobs to 1, even if you
> > > > specify -j 8 to gmake or ninja.
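> > > >
> > > > For example, a configure line along these lines (illustrative, not
> > > > taken from the report above; adjust the generator, build type, and
> > > > paths to suit) keeps the compiles parallel while the debug-heavy
> > > > links run one at a time:
> > > >
> > > > % cd llvm/build
> > > > % cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
> > > >     -DLLVM_PARALLEL_LINK_JOBS:STRING="1" ..
> > > > % ninja -j8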
> > > >
> > > > Brooks, it would not be a bad idea to always use this CMake flag
> > > > in the llvm ports. :)
> > >
> > > And this may also fix the issues that all the small
> > > memory (aka, RPI*) builders are facing when trying
> > > to do -j4?
> >
> > It may help, but:
> >
> > Even just compiles with no links running can get the kills
> > in such small-system contexts.
> >
> > And going for a simpler context that can demonstrate
> > the behavior . . .
> >
> > Taking a Pine64+ 2GB as an example (4 cores with 1
> > HW-thread per core, 2 GiBytes of RAM, USB device for
> > the root file system and swap partition):
> >
> > In another login:
> > # stress -d 2 -m 4 --vm-keep --vm-bytes 536870912
> >
> > That "4" and "536870912" total to the 2 GiBytes (4 x 512 MiBytes),
> > so swapping is induced for the context in question.
> > (Scale --vm-bytes appropriately to the context.)
> > [Note: I had 3 GiBytes of swap space in a partition
> > for the runs below.]
> >
> > [stress is from the port sysutils/stress.]
> >
> > I had left the default vm.pageout_oom_seq=12 in place for this,
> > making the kills easier to get than the 120 figure would. It
> > generally does not take very long for some sort of message to
> > show up. Sometimes kills happen.
> >
> > My test environment has Mark Johnston's patches to report
> > things not normally reported:
> >
> > waited 9s for async swap write
> > waited 9s for swap buffer
> > waited 9s for async swap write
> > waited 9s for async swap write
> > waited 9s for async swap write
> > v_free_count: 1357, v_inactive_count: 1
> > Aug 20 06:04:27 pine64 kernel: pid 1010 (stress), uid 0, was killed:
> > out of swap space
> > waited 5s for async swap write
> > waited 5s for swap buffer
> > waited 5s for async swap write
> > waited 5s for async swap write
> > waited 5s for async swap write
> > waited 13s for async swap write
> > waited 12s for swap buffer
> > waited 13s for async swap write
> > waited 12s for async swap write
> > waited 12s for async swap write
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 161177, size: 131072
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 164766, size: 65536
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 164064, size: 12288
> > . . .
> >
> > (I made multiple runs, most of them stopped manually. None ran
> > for all that long.)
> >
> > (Warning: the "swap space" part of "killed: out of swap space"
> > can be a misnomer. Killing is driven by having low free RAM
> > for sufficiently long; vm.pageout_oom_seq controls how long.
> > Swap space may be unused, little used, or actually low. With
> > 3 GiBytes of swap space in the partition, these runs were not
> > low on swap space.)
> >
> > Even with fairly stable ms/w figures, the queue depths can get
> > large, implying long times before the last queued write
> > completes. For example (for a USB storage device holding the
> > root file system and swap partition):
> >
> > dT: 1.006s  w: 1.000s
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy Name
> >     0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0| mmcsd0
> >    56    312      0      0    0.0    312  19985  142.6      0      0    0.0   99.6| da0
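> >
> > (Rough arithmetic on that da0 line, as a sanity check: 19985 kBps
> > over 312 w/s is about 64 KiBytes per write, and a queue of 56
> > writes draining at 312 writes/s takes on the order of
> > 56 / 312 =~ 0.18 s, in the same ballpark as the reported
> > 142.6 ms/w average.)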
> >
> > Large quantities of bytes are being queued in a short time
> > relative to the roughly 20 MiBytes/sec figure that can be
> > sustained. The quantity is spread over the queue entries.
> >
> > Note: the Pine64+ 2GB builds devel/llvm60 in 14.5 hr just fine
> > with vm.pageout_oom_seq=120 when using all 4 cores. But it
> > has twice the RAM of an rpi3/rpi2 while having the same number
> > of cores. It simply does not have as much to write out as
> > often. (The same sort of point goes for buildworld/buildkernel
> > sorts of activities.)
> >
> > ===
> > Mark Millard
> > marklmi at yahoo.com
> > ( dsl-only.net went away in early 2018-Mar)
>
> I'd just like to clarify that I am not on what should be considered
> a limited system like the RPI.
>
> I have:
> hw.model: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
> 32GB of RAM and a 256GB M.2 SSD.

And you are probably running ZFS, which ate all your memory and put
you in a memory-constrained environment. You can have 384GB of RAM
and still be memory constrained if you let something eat that memory
up in a way that the pager can not get to it.

-- 
Rod Grimes                                         rgrimes@freebsd.org