Date:      Mon, 8 Dec 2025 02:15:33 +0200
From:      Konstantin Belousov <kib@freebsd.org>
To:        Mateusz Guzik <mjguzik@gmail.com>
Cc:        Warner Losh <imp@bsdimp.com>, Mark Millard <marklmi@yahoo.com>, FreeBSD Current <freebsd-current@freebsd.org>, FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: performance regressions in 15.0
Message-ID:  <aTYYpcYm8uOU1M_q@kib.kiev.ua>
In-Reply-To: <CAGudoHEztjmAb2uxRYK-CDjUBx6kEoeKDUgF8R4UvhoNp3A4_w@mail.gmail.com>
References:  <EF95C136-B1D2-4820-A069-D0078A3B5A05@yahoo.com> <18FB2858-5CBB-4B7A-8089-224A58C6A160@yahoo.com> <CANCZdfqfXfzGQRN5TR7KFcNE1-Ng4ECFKD_6V0118b2UwwX09Q@mail.gmail.com> <CAGudoHEztjmAb2uxRYK-CDjUBx6kEoeKDUgF8R4UvhoNp3A4_w@mail.gmail.com>


On Sun, Dec 07, 2025 at 11:30:41AM +0100, Mateusz Guzik wrote:
> On Sat, Dec 6, 2025 at 11:26 PM Warner Losh <imp@bsdimp.com> wrote:
> > A few months before I landed the jemalloc patches, I did 4 or 5 from-scratch buildworlds. The elapsed time was, IIRC, within 1 or 2%. Enough to maybe see a diff with the small sample size, but not enough for ministat to trigger at 95%. I don't recall keeping the data for this and can't find it now. And I'm not even sure, in hindsight, that I ran a good experiment. It might be related, or not, but it would be easy enough for someone to set up two jails: one just before and one just after. Build the world from scratch (same hash) on both. That would test it, since you'd be holding all other variables constant.
> >
> > When we imported the tip of FreeBSD main at work, we didn't get a CPU-change trigger from our tests, that I recall...
> >
> 
> Note you probably build-tested with clang, which was already penalized.
> 
> I just verified that going to libc as of this commit:
> commit c43cad87172039ccf38172129c79755ea79e6102 (HEAD)
> Merge: da260ab23f26 48ec896efb0b
> Author: Warner Losh <imp@FreeBSD.org>
> Date:   Mon Aug 11 17:38:36 2025 -0600
> 
>     jemalloc: Merge from jemalloc 5.3.0 vendor branch
> 
> retains the perf problem as seen in the malloc microbenchmarks
> 
> and that going to one commit prior brings it back in line with 14.3
> 
> built like so from lib/libc:
> make -s -j 8 WITHOUT_TESTS=1 MALLOC_PRODUCTION=yes all install
> 
> Given that jemalloc prior to the import is a well known working state,
> I think it will be most prudent to revert the update for the time
> being and investigate it later.
> 
> Note both jemalloc itself and clang aside, there is the issue of
> slower binary startup in the first place (see the doexec.c parts in my
> e-mail).
> 
> Given the magnitude of the slowdowns, the above two are definitely EN
> material. Sorting out the startup thing should qualify too, depending
> on the complexity of the fix, whatever it might be.
> 
I completely disagree.
The NO_SHARED toolchain was a remnant from the prehistoric times when
the migration to ELF was performed, I believe.  It was a precaution to
make it possible to recover the system in case the rtld/dynamic libc
installation went wrong.  In other words, a non-shared toolchain is a
mistake on its own.

Next, the change to dynamically link the llvm components against the
llvm libs is how upstream does it.  Not to mention that using clang+lld
this way saves both disk space (not very important) and memory (much
more important).

The implied load on rtld is something that could be handled: there is
definitely no need for such a huge surface of exported symbols on both
libllvm and especially libclang.  Perhaps the internal libraries could
use protected symbols by default; C++ code does not normally rely on
interposing.  But such 'fixes' must occur upstream.

So far all the clang toolchain changes were aligning it with what the
llvm project does.


