Date:      Wed, 3 Jun 2020 22:45:11 +0200
From:      Daniel Ebdrup Jensen <debdrup@FreeBSD.org>
To:        freebsd-hackers@freebsd.org
Subject:   Re: Constant load of 1 on a recent 12-STABLE
Message-ID:  <20200603204511.6qmsub2gqc44jkjw@nerd-thinkpad.local>
In-Reply-To: <20200603202929.GA65032@lion.0xfce3.net>
References:  <20200603101607.GA80381@lion.0xfce3.net> <c18664e8-b4e3-1402-48ed-3a02dc36ce29@freebsd.org> <20200603202929.GA65032@lion.0xfce3.net>

On Wed, Jun 03, 2020 at 10:29:29PM +0200, Gordon Bergling via freebsd-hackers wrote:
>Hi Allan,
>
>On Wed, Jun 03, 2020 at 03:13:47PM -0400, Allan Jude wrote:
>> On 2020-06-03 06:16, Gordon Bergling via freebsd-hackers wrote:
>> > for a while now I have been seeing a constant load of 1.00 on 12-STABLE,
>> > but all CPUs are shown as 100% idle in top.
>> >
>> > Does anyone have an idea what could have caused this?
>> >
>> > The load seems to be somewhat real, since the build times for -CURRENT
>> > on this machine have increased from about 2 hours to 3 hours.
>> >
>> > This is a virtualized system running on Hyper-V, if that matters.
>> >
>> > Any hints are more than appreciated.
>> >
>> > Kind regards,
>> >
>> > Gordon
>>
>> Try running 'top -SP' and see if that shows a specific CPU being busy,
>> or a specific process using CPU time.
>
>Below is the output of 'top -SP'. The only relevant process / thread that
>consumes CPU time fairly constantly seems to be 'zfskern'.
>
>------------------------------------------------------------------------------
>last pid: 68549;  load averages:  1.10,  1.19,  1.16 up 0+14:59:45  22:17:24
>67 processes:  2 running, 64 sleeping, 1 waiting
>CPU 0:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
>CPU 1:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
>CPU 2:  0.0% user,  0.0% nice,  0.4% system,  0.0% interrupt, 99.6% idle
>CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
>Mem: 108M Active, 4160M Inact, 33M Laundry, 3196M Wired, 444M Free
>ARC: 1858M Total, 855M MFU, 138M MRU, 96K Anon, 24M Header, 840M Other
>     461M Compressed, 1039M Uncompressed, 2.25:1 Ratio
>Swap: 2048M Total, 2048M Free
>
>  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
>   11 root          4 155 ki31     0B    64K RUN      0  47.3H 386.10% idle
>    8 root         65  -8    -     0B  1040K t->zth   0 115:39  12.61% zfskern
>------------------------------------------------------------------------------
>
>The only key performance indicator that looks relatively high for a non-busy
>system, IMHO, is the context switch rate that vmstat reports.
>
>------------------------------------------------------------------------------
>procs  memory       page                    disks     faults         cpu
>r b w  avm   fre   flt  re  pi  po    fr   sr da0 da1   in    sy    cs us sy id
>0 0 0 514G  444M  7877   2   7   0  9595  171   0   0    0  4347 43322 17  2 81
>0 0 0 514G  444M     1   0   0   0     0   44   0   0    0   121 40876  0  0 100
>0 0 0 514G  444M     0   0   0   0     0   40   0   0    0   133 42520  0  0 100
>0 0 0 514G  444M     0   0   0   0     0   40   0   0    0   120 43830  0  0 100
>0 0 0 514G  444M     0   0   0   0     0   40   0   0    0   132 42917  0  0 100
>------------------------------------------------------------------------------
>
>Any other ideas what could generate that load?
>
>Best regards,
>
>Gordon
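
To follow up on Allan's 'top -SP' suggestion and the output above: to narrow
down which of zfskern's threads is actually accumulating that time, a
per-thread view usually helps. A minimal sketch, using only stock tools and
the pid 8 shown in your top output above:

    # per-CPU display with kernel threads shown individually
    top -SHP
    # list the individual threads that make up zfskern (pid 8 in your output)
    procstat -t 8
    # interrupt counts and rates, to see whether a timer or VM device is busy
    vmstat -i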

I seem to recall bde@ (may he rest in peace) mentioning that the ULE
scheduler had some weirdness where it would sometimes report a higher load
average for no apparent reason (one of my systems would regularly idle at a
load of 0.60, but it no longer does so on 12.1, so I gave up trying to debug
it), and that this might be linked to how WCPU and CPU don't differ under
the ULE scheduler.
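
If you want to rule that in or out, it is at least easy to confirm which
scheduler the kernel was built with; a trivial check, nothing here is
specific to your setup:

    # prints the scheduler compiled into the running kernel, e.g. ULE or 4BSD
    sysctl kern.sched.name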

Have you tried setting the kern.eventtimer.periodic sysctl to 1?
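
For reference, a minimal sketch of the knobs involved, assuming a stock
GENERIC kernel (run as root to change anything):

    # 0 = one-shot (dynamic tick) mode, 1 = periodic ticks
    sysctl kern.eventtimer.periodic
    # switch to periodic mode at runtime
    sysctl kern.eventtimer.periodic=1
    # show which event timer is currently used and which ones are available
    sysctl kern.eventtimer.timer kern.eventtimer.choice
    # add kern.eventtimer.periodic=1 to /etc/sysctl.conf to make it persistent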

Yours,
Daniel Ebdrup Jensen
