Date: Sun, 14 Aug 2022 18:40:49 -0700
From: Mark Millard <marklmi@yahoo.com>
To: Nuno Teixeira <eduardo@freebsd.org>
Cc: FreeBSD Mailing List <freebsd-ports@freebsd.org>
Subject: Re: Resolved: devel/llvm13 build: "ninja: build stopped: subcommand failed"
Message-ID: <B8C17283-0C5E-4D84-B10F-0712B26BDCB9@yahoo.com>
In-Reply-To: <7CDC63F3-8B68-420E-8012-B1692667E293@yahoo.com>
References: <1D4C14BD-8955-4B86-9C99-3E58D7603122.ref@yahoo.com> <1D4C14BD-8955-4B86-9C99-3E58D7603122@yahoo.com> <CAFDf7UK-pAFXCrZZA9veASaa-wf9HKMdX52fxmcmDgRFiNOF7A@mail.gmail.com> <7CDC63F3-8B68-420E-8012-B1692667E293@yahoo.com>
On 2022-Aug-14, at 09:35, Mark Millard <marklmi@yahoo.com> wrote:

> On 2022-Aug-14, at 07:50, Nuno Teixeira <eduardo@freebsd.org> wrote:
>
> . . .
>> I have some time now. It is caused by a peak of memory use during the
>> build that affects people with less than 32/64 GB of RAM. To work
>> around it, the port must be built with one builder using one core,
>> which takes about 7 hours on my machine, or with 6c+6t on 12.3 i386,
>> which takes about 45 min (12.3 i386 is the only jail in which I can
>> use all cores).
>
> Last I tried, I built all the various devel/llvm* ports on an 8 GiByte
> RPi4B, 4 builders active and ALLOW_MAKE_JOBS=yes in use.
> 4 FreeBSD cpus, so the load average would have been around 16+
> much of the time during devel/llvm13's builder activity.
> USE_TMPFS=data was in use.
>
> Similarly for a 16 GiByte machine --but it is also an aarch64
> context, also with 4 FreeBSD cpus.
>
> But I use in /boot/loader.conf:
>
> #
> # Delay when persistent low free RAM leads to
> # Out Of Memory killing of processes:
> vm.pageout_oom_seq=120
>
> This has historically been important for avoiding the likes of
> "was killed: failed to reclaim memory" and related notices on
> various armv7 and aarch64 small board computers used to
> buildworld, buildkernel, and build ports using all the cores.
>
> The only amd64 system that I've access to has 32 FreeBSD cpus
> and 128 GiBytes of RAM. Not a good basis for a comparison test
> with your context. I've no i386 access at all.
>
>> llvm 12 builds without problems
>
> Hmm. I'll try building devel/llvm13 on aarch64 with periodic
> sampling of memory use to see the maximum observed figures
> for SWAP and for various categories of RAM, as well as the
> largest observed load averages.
>
> ZFS context in use. I could try UFS as well.
>
> Swap: 30720Mi total on the 8 GiByte RPi4B,
> so about 38 GiBytes of RAM+SWAP available.
> We should see how much SWAP is used.
>
> Before starting poudriere, shortly after a reboot:
>
> 19296Ki MaxObs(Act+Lndry+SwapUsed)
> (No SWAP in use at the time.)
>
> # poudriere bulk -jmain-CA72-bulk_a -w devel/llvm13
>
> For the from-scratch build, poudriere reports:
>
> [00:00:34] Building 91 packages using up to 4 builders
>
> The ports tree is about a month back:
>
> # ~/fbsd-based-on-what-commit.sh -C /usr/ports/
> branch: main
> merge-base: 872199326a916efbb4bf13c97bc1af910ba1482e
> merge-base: CommitDate: 2022-07-14 01:26:04 +0000
> 872199326a91 (HEAD -> main, freebsd/main, freebsd/HEAD) devel/ruby-build: Update to 20220713
> n589512 (--first-parent --count for merge-base)
>
> But, if I gather right, the problem you see goes back
> before that.
>
> I cannot tell how 4 FreeBSD cpus compare to the
> count that the Lenovo Legion 5 gets.
>
> I'll report on its maximum observed figures once the
> build stops. It will be a while before the RPi4B
> gets that far.
>
> The ports built prior to devel/llvm13's builder starting
> will lead to load averages over 4 from up to 4
> builders, each potentially using up to around 4
> processes. I'll see about starting separate tracking
> once devel/llvm13's builder has started, if I happen
> to observe the right time frame for doing such.
>
> . . .

I actually have tried a few builds on different machines. The 8 GiByte
RPi4B takes a long time and is currently omitted from this report.

128 GiByte amd64 ThreadRipper 1950X (16 cores, so 32 FreeBSD cpus),
but using MAKE_JOBS_NUMBER=4 (with both FLANG and MLIR):

On amd64 I started a build with FLANG and MLIR enabled, using
MAKE_JOBS_NUMBER=4 in devel/llvm13/Makefile to limit the build to 4
FreeBSD cpus. It is a ZFS context. Given the 128 GiBytes of RAM, there
will not be much in the way of memory-pressure effects, but I will
record the MaxObs(Act+Lndry+SwapUsed) figure and the like.
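As an aside on what "periodic sampling of the memory use" could look
like: a minimal sh sketch, not my actual script. The sysctl and
swapinfo wiring (commented out below) assumes FreeBSD's standard
vm.stats names and is hypothetical as written; the max-tracking awk
itself is portable.

```shell
#!/bin/sh
# Sketch of a MaxObs(Act+Lndry+SwapUsed) sampler.

# max_obs: read one sample (a MiByte figure) per line on stdin;
# print the maximum seen.
max_obs() {
    awk 'BEGIN { max = 0 }
         $1 + 0 > max { max = $1 + 0 }
         END { printf "%dMi MaxObs\n", max }'
}

# On FreeBSD the samples could come from something like (hypothetical
# wiring; verify the sysctl names on your system):
#   pgsz=$(sysctl -n hw.pagesize)
#   while :; do
#       act=$(sysctl -n vm.stats.vm.v_active_count)
#       lndry=$(sysctl -n vm.stats.vm.v_laundry_count)
#       swapk=$(swapinfo -k | awk 'NR > 1 { s += $3 } END { print s + 0 }')
#       echo $(( (act + lndry) * pgsz / 1048576 + swapk / 1024 ))
#       sleep 10
#   done | max_obs

# Demonstration with fixed sample values:
printf '6812\n13074\n5000\n' | max_obs
```

Run under cron or in a spare terminal for the duration of the build,
that is enough to get the peak figure after the fact.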
---Begin OPTIONS List---
===> The following configuration options are available for llvm13-13.0.1_3:
     BE_AMDGPU=on: AMD GPU backend (required by mesa)
     BE_WASM=on: WebAssembly backend (required by firefox via wasi)
     CLANG=on: Build clang
     COMPILER_RT=on: Sanitizer libraries
     DOCS=on: Build and/or install documentation
     EXTRAS=on: Extra clang tools
     FLANG=on: Flang FORTRAN compiler
     GOLD=on: Build the LLVM Gold plugin for LTO
     LIT=on: Install lit and FileCheck test tools
     LLD=on: Install lld, the LLVM linker
     LLDB=on: Install lldb, the LLVM debugger
     MLIR=on: Multi-Level Intermediate Representation
     OPENMP=on: Install libomp, the LLVM OpenMP runtime library
     PYCLANG=on: Install python bindings to libclang
====> Options available for the single BACKENDS: you have to select exactly one of them
     BE_FREEBSD=off: Backends for FreeBSD architectures
     BE_NATIVE=off: Backend(s) for this architecture (X86)
     BE_STANDARD=on: All non-experimental backends
===> Use 'make config' to modify these settings
---End OPTIONS List---

[02:23:55] [01] [02:04:29] Finished devel/llvm13 | llvm13-13.0.1_3: Success

For just the devel/llvm13 builder activity, with no parallel builds
and excluding the prerequisites being built:

load averages: . . . MaxObs: 6.76, 4.75, 4.38

6812Mi MaxObs(Act+Lndry+SwapUsed), but no use of SWAP observed.

Note: MAKE_JOBS_NUMBER does not constrain any lld process from using
all available FreeBSD cpus (via threading) --and multiple lld's can be
active at the same time.

So this looks to fit in a 16 GiByte RAM context just fine, no SWAP
needed.

I'll try MAKE_JOBS_NUMBER=12 instead and rerun on the same machine.
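For reference, the MAKE_JOBS_NUMBER limit need not be edited into
devel/llvm13/Makefile; a per-port conditional in the jail's make.conf
does the same job. A sketch, assuming the usual poudriere make.conf
location (verify the path for your setup):

```make
# Hypothetical fragment for /usr/local/etc/poudriere.d/make.conf:
# cap the build parallelism for devel/llvm13 only, leaving other
# ports free to use ALLOW_MAKE_JOBS as configured.
.if ${.CURDIR:M*/devel/llvm13}
MAKE_JOBS_NUMBER=4
.endif
```

Note this only limits the number of make jobs; as observed above, a
single threaded lld can still use every FreeBSD cpu.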
128 GiByte amd64 ThreadRipper 1950X (16 cores, so 32 FreeBSD cpus),
but using MAKE_JOBS_NUMBER=12 (with both FLANG and MLIR):

(The OPTIONS list was identical to the MAKE_JOBS_NUMBER=4 run above.)

[00:55:37] [01] [00:55:30] Finished devel/llvm13 | llvm13-13.0.1_3: Success

load averages: . . . MaxObs: 12.45, 12.20, 11.52

13074Mi MaxObs(Act+Lndry+SwapUsed), but no use of SWAP observed.

Note: MAKE_JOBS_NUMBER does not constrain any lld process from using
all available FreeBSD cpus (via threading) --and multiple lld's can be
active at the same time.

(16+4)*1024 Mi - 13074 Mi == 7406 Mi left for other RAM+SWAP use.
(Crude estimates relative to your context.) That would seem to be
plenty.

Conclusion: it is far from clear what was contributing to your
(16+4)*1024 MiBytes proving to be insufficient. Unintentional tmpfs
use, such as a typo in USE_TMPFS in /usr/local/etc/poudriere.conf?
I really have no clue: the example is arbitrary.
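The headroom estimate can be checked directly in sh arithmetic. This
assumes, as I read it, that the (16+4) means 16 GiBytes of RAM plus
4 GiBytes of SWAP on your machine:

```shell
#!/bin/sh
# Crude RAM+SWAP headroom estimate for the MAKE_JOBS_NUMBER=12 run:
# 16 GiBytes RAM + 4 GiBytes SWAP, minus the observed peak.
total=$(( (16 + 4) * 1024 ))    # MiBytes of RAM+SWAP available
maxobs=13074                    # MiBytes, MaxObs(Act+Lndry+SwapUsed)
echo "$(( total - maxobs )) Mi left for other RAM+SWAP use"
```

The same figure as in the text: 7406 Mi of headroom on top of the
observed peak.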
Other notes:

# uname -apKU
FreeBSD amd64_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #50 main-n256584-5bc926af9fd1-dirty: Wed Jul 6 17:44:43 PDT 2022 root@amd64_ZFS:/usr/obj/BUILDs/main-amd64-nodbg-clang/usr/main-src/amd64.amd64/sys/GENERIC-NODBG amd64 amd64 1400063 1400063

Note that the above is without WITNESS, without INVARIANTS, and the
like.

The only thing committed to main's contrib/llvm-project after that
was 9ef1127008:

QUOTE
Apply tentative llvm fix for avoiding fma on PowerPC SPE

Merge llvm review D77558, by Justin Hibbits:

PowerPC: Don't hoist float multiply + add to fused operation on SPE

SPE doesn't have a fmadd instruction, so don't bother hoisting a
multiply and add sequence to this, as it'd become just a library call.
Hoisting happens too late for the CTR usability test to veto using the
CTR in a loop, and results in an assert "Invalid PPC CTR loop!".
END QUOTE

===
Mark Millard
marklmi at yahoo.com