Date: Wed, 4 Apr 2018 22:42:22 +0200
From: Peter <pmc@citylink.dinoex.sub.org>
To: freebsd-stable@FreeBSD.ORG
Subject: Re: kern.sched.quantum: Creepy, sadistic scheduler
Message-ID: <pa3dbe$1md4$1@oper.dinoex.de>
In-Reply-To: <6883cf2d-207e-21ae-8d55-c768f0b72a73@FreeBSD.org>
References: <pa17m7$82t$1@oper.dinoex.de> <6883cf2d-207e-21ae-8d55-c768f0b72a73@FreeBSD.org>
Andriy Gapon wrote:
> Not everyone has a postgres server and a suitable database.
> Could you please devise a test scenario that demonstrates the problem
> and that anyone could run?

Alright, simple things first: I can reproduce the effect without
postgres, with regular commands.

I run this on my database file:

# lz4 2058067.1 /dev/null

And get this throughput:

pool        alloc   free   read  write   read  write
cache           -      -      -      -      -      -
  ada1s4    7.08G  10.9G    889      0  7.07M  42.3K

  PID USERNAME   PRI NICE   SIZE    RES STATE    TIME    WCPU COMMAND
51298 root        87    0 16184K  7912K RUN      1:00  51.60% lz4

I start the piglet:

$ while true; do :; done

And, same effect:

pool        alloc   free   read  write   read  write
cache           -      -      -      -      -      -
  ada1s4    7.08G  10.9G     10      0  82.0K      0

  PID USERNAME   PRI NICE   SIZE    RES STATE    TIME    WCPU COMMAND
 1911 admin       98    0  7044K  2860K RUN     65:48  89.22% bash
51298 root        52    0 16184K  7880K RUN      0:05   0.59% lz4

It does *not* happen with plain "cat" instead of "lz4".

What may or may not have an influence on it: the respective filesystem
uses an 8k block size and is 100% resident in l2arc.

What is also interesting: I started trying this with "tar" (no effect,
it behaves properly), then with "tar --lz4". In the latter case "tar"
starts "lz4" as a sub-process, so we have three processes in play -
and in that case the effect happens, but to a lesser extent: about
75 I/Os per second.

So, it seems quite clear that this has something to do with the logic
inside the scheduler.
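For convenience, the steps above could be scripted roughly as follows.
This is only a sketch: the script name "repro.sh", the file and pool
arguments, the 5-second sampling windows, and the extra -f flag to lz4
(so it does not refuse to overwrite the existing /dev/null) are my own
additions, not part of the test described above.

#!/bin/sh
# repro.sh - sketch of the reproduction steps; TESTFILE and POOL are
# placeholders, assumed to point at a large file on a ZFS dataset.
TESTFILE=${1:?usage: repro.sh file-on-zfs [pool]}
POOL=${2:-tank}

# Reader: compress the file to /dev/null in the background, then
# sample pool I/O for 5 seconds to get the baseline throughput.
lz4 -f "$TESTFILE" /dev/null &
LZ4_PID=$!
zpool iostat -v "$POOL" 1 5

# Piglet: a pure CPU-bound shell loop.  Sample pool I/O again and
# watch the reader's throughput drop while the piglet runs.
sh -c 'while true; do :; done' &
PIG_PID=$!
zpool iostat -v "$POOL" 1 5

kill "$PIG_PID" "$LZ4_PID" 2>/dev/null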