Date:      Wed, 12 Dec 2018 08:07:46 +1100 (EST)
From:      Bruce Evans <brde@optusnet.com.au>
To:        John Baldwin <jhb@freebsd.org>
Cc:        Devin Teske <dteske@freebsd.org>, Conrad Meyer <cem@freebsd.org>,  src-committers@freebsd.org, svn-src-all@freebsd.org,  svn-src-head@freebsd.org
Subject:   Re: svn commit: r341803 - head/libexec/rc
Message-ID:  <20181212071210.L826@besplex.bde.org>
In-Reply-To: <dafbcc18-146f-2e4f-e1e9-346d7c05b096@FreeBSD.org>
References:  <201812110138.wBB1cp1p006660@repo.freebsd.org> <2a76b295-b2da-3015-c201-dbe0ec63ca5a@FreeBSD.org> <98481565-CDD7-4301-B86B-072D5B984AF7@FreeBSD.org> <dafbcc18-146f-2e4f-e1e9-346d7c05b096@FreeBSD.org>

On Tue, 11 Dec 2018, John Baldwin wrote:

> On 12/11/18 9:40 AM, Devin Teske wrote:
>> ...
>> Thank you for the background, which was lost by the time I got to the phab.
>>
>> I can't help but ask though,...
>>
>> If it was noticed that read(2) processes the stream one byte at a time,
>> why not just optimize read(2)?
>>
>> I'm afraid of the prospect of having to hunt down every instance of while-read,
>> but if we can fix the underlying read(2) inefficiency then we make while-read OK.
>
> It's a system call.  A CPU emulator has to do a lot of work for a system call
> because it involves two mode switches (user -> kernel and back again).  You
> can't "fix" that; it's just part of the CPU architecture.  There's a reason
> stdio uses buffering by default: system calls have overhead.
> The 'read' builtin in sh can't use buffering, so it is always going to be
> inefficient.
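
John's buffering point is easy to see in miniature.  Here is a quick
sketch of the two styles (my illustration only, not the harness that
produced the numbers below; the input path is just a placeholder, since
any few-MB file will do):

/*
 * Sketch: byte-at-a-time read(2) versus buffered stdio.
 * Compile with something like: cc -O2 -o readbench readbench.c
 */
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define PATH    "/usr/share/dict/words"         /* placeholder input */

static double
seconds(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (ts.tv_sec + ts.tv_nsec / 1e9);
}

int
main(void)
{
        FILE *fp;
        double t;
        long n;
        int fd;
        char c;

        /* One syscall (two mode switches) per byte, like sh's 'read' builtin. */
        if ((fd = open(PATH, O_RDONLY)) == -1)
                err(1, "open");
        t = seconds();
        for (n = 0; read(fd, &c, 1) == 1; n++)
                ;
        printf("read(2), 1 byte at a time: %.0fk/sec\n",
            n / (seconds() - t) / 1000);
        close(fd);

        /* stdio fills a buffer with one large read(); most fgetc()s avoid the kernel. */
        if ((fp = fopen(PATH, "r")) == NULL)
                err(1, "fopen");
        t = seconds();
        for (n = 0; fgetc(fp) != EOF; n++)
                ;
        printf("buffered fgetc(3):         %.0fk/sec\n",
            n / (seconds() - t) / 1000);
        fclose(fp);
        return (0);
}

Only the first loop pays a kernel entry per byte; the second does one
read() per bufferful, so it mostly measures userland.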

Syscalls have always been well known to be slow, but slowness is relative.
CPUs are thousands of times faster than in 1980, so systems should be able
to crunch through a few MB of data read 1 byte at a time fast enough that
no one notices the slowness, even when emulated.

But software bloat is now outrunning CPU speed increases.  Here are some
bandwidths for reading 1 byte at a time, all measured today on the same
2GHz i386 UP hardware:

linux-2.1.128 kernel built in 1998: 2500k/sec
linux-2.4.0t8 kernel built in 2000: 1720k/sec
linux-2.6.10  kernel built in 2004: 1540k/sec
FreeBSD-4     kernel built in 2007:  680k/sec
FreeBSD-~5.2  kernel built in 2018:  700k/sec
FreeBSD-11    kernel built in 2018:  720k/sec (SMP kernel)
FreeBSD-pre12 kernel built in 2018:  540k/sec (SMP kernel)
FreeBSD-13    kernel built in 2018:  170k/sec (SMP kernel)
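
To put those numbers in human terms: a 3MB file read 1 byte at a time
takes about 1.2 seconds at the 1998 rate (3000k / 2500k/sec) but nearly
18 seconds at the current rate (3000k / 170k/sec); that is slow enough
for anyone to notice.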

This is with all recent security-related pessimizations like IBRS turned
off.  The main one for i386, however, is using separate address spaces for
the kernel and userland, and that cannot be turned off; it is what gives
most of the recent slowdown factor of more than 3.  This slowdown factor
is close to 3 even for large block sizes, since read() is now so slow that
it takes about the same time for all block sizes below a few hundred bytes.
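
In cycle terms: at 2GHz, 2500k/sec is roughly 800 cycles per 1-byte
read(), while 170k/sec is roughly 11800 cycles, nearly all of it spent
on the mode switches and address space switches rather than on moving
the byte.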

Network bandwidth has similar slowdowns for small packets starting in
FreeBSD-4 (the old Linuxes have low enough syscall overhead for the NIC
to saturate at 640 kpps before the CPU or 1 Gbps ethernet saturates).
amd64 doesn't have the 3-fold slowdown from separate address spaces.

Optimizing syscalls is not very important in itself, but it is convenient
for applications with bad buffering not to be very slow, and it is annoying
to lose benchmarks by a factor of 2 in 1998 and by a factor of 10 now.

Bruce


