Date:      Thu, 31 Jan 2019 12:20:25 -0800
From:      Mark Millard <marklmi@yahoo.com>
To:        FreeBSD PowerPC ML <freebsd-ppc@freebsd.org>
Cc:        Justin Hibbits <chmeeedalf@gmail.com>
Subject:   Re: PowerMac G5 "4 core" (system total): some from-power-off booting observations of the modern VM_MAX_KERNEL_ADDRESS value mixed with usefdt=1
Message-ID:  <99ECB1C6-0A3A-446B-8699-9FCE00FD3E6F@yahoo.com>
In-Reply-To: <FA03DF6B-B8FC-42E5-A5B3-2891D88BDAD7@yahoo.com>
References:  <2153561F-FE12-4BA0-9856-F75110401AB6@yahoo.com> <FA03DF6B-B8FC-42E5-A5B3-2891D88BDAD7@yahoo.com>

[Adding sysctl -a output of some of the differences for the old vs.
modern VM_MAX_KERNEL_ADDRESS figures being in use. I tried to pick
out static figures rather than active ones, unless the difference
was notably larger for the distinct VM_MAX_KERNEL_ADDRESS figures.
Each sysctl -a was taken shortly after booting.]
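
For reference, a minimal sketch of reading one such figure
programmatically via sysctlbyname(3); vm.max_kernel_address is used
as the example name here, but any figure from the comparison below
could be substituted:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t v;
	size_t len = sizeof(v);

	/* Read one figure from the comparison below. */
	if (sysctlbyname("vm.max_kernel_address", &v, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("vm.max_kernel_address: %ju\n", (uintmax_t)v);
	return (0);
}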

On 2019-Jan-30, at 17:18, Mark Millard <marklmi at yahoo.com> wrote:

> [Where boot -v output is different between booting to completion vs.
> hanging up: actual text.]
>
> On 2019-Jan-29, at 17:52, Mark Millard <marklmi at yahoo.com> wrote:
>
>> For the modern VM_MAX_KERNEL_ADDRESS value and also use of the
>> usefdt=1 case:
>>
>> This usually hangs during boot, during the "Waking up CPU" message
>> sequence.
>>
>> But not always. Powering off and retrying, sometimes just a few
>> times, and other times dozens of times, the system that I have
>> access to does eventually boot for the combination. So some sort
>> of race condition or lack of stable initialization?
>>
>> When it does boot, SMP seems to be set up and working.
>>
>> Once booted, it is usually not very long until the fans are going
>> wild, other than an occasional, temporary lull.
>>
>>
>>
>> For shutting down, the following applies to both VM_MAX_KERNEL_ADDRESS
>> values when a usefdt=1 type of context is in use:
>>
>> When I've kept explicit track, I've not had any example of all of
>> the:
>>
>> Waiting (max 60 seconds) for system thread `bufdaemon' to stop...
>> Waiting (max 60 seconds) for system thread `bufspacedaemon-1' to stop...
>> Waiting (max 60 seconds) for system thread `bufspacedaemon-0' to stop...
>> . . .
>>
>> getting to "done": instead, one or more time out. Which ones and
>> how many vary.
>>
>> The fans tend to take off for both VM_MAX_KERNEL_ADDRESS values. The
>> buf*daemon timeouts happen even if the fans have not taken off.
>>
>
> With VM_MAX_KERNEL_ADDRESS reverted or a successful
> boot with the modern value:
>
> Adding CPU 0, hwref=cd38, awake=1
> Waking up CPU 3 (dev=c480)
> Adding CPU 3, hwref=c480, awake=1
> Waking up CPU 2 (dev=c768)
> Adding CPU 2, hwref=c768, awake=1
> Waking up CPU 1 (dev=ca50)
> Adding CPU 1, hwref=ca50, awake=1
> SMP: AP CPU #3 launched
> SMP: AP CPU #2 launched
> SMP: AP CPU #1 launched
> Trying to mount root from ufs:/dev/ufs/FBSDG5L2rootfs [rw,noatime]...
>
>
> With the modern VM_MAX_KERNEL_ADDRESS value for a boot attempt
> that failed, an example (typed from a picture of the screen) is:
>
> Adding CPU 0, hwref=cd38, awake=1
> Waking up CPU 3 (dev=c480)
>
> Another is:
>
> Adding CPU 0, hwref=cd38, awake=1
> Waking up CPU 3 (dev=c480)
> Waking up CPU 2 (dev=c768)
>
> (Both examples have no more output.)
>
> So CPUs 1..3 do not get "Adding CPU" messages. Also:
> I do not remember seeing all 3 "Waking up CPU" messages,
> just 1 or 2 of them.
>
> (Sometimes the "Trying to mount root from" message is in
> the mix as I remember.)
>
>
> One point of difference that is consistently observable for
> the old vs. modern VM_MAX_KERNEL_ADDRESS values is how many
> bufspacedaemon-* threads there are:
>
> old VM_MAX_KERNEL_ADDRESS value: 0..2
> new VM_MAX_KERNEL_ADDRESS value: 0..6
>
>
> I have had many boot attempts in a row succeed
> for the modern VM_MAX_KERNEL_ADDRESS value,
> though not as many as the dozens of failures
> in a row. Highly variable with lots of
> testing.
>

Do all the increases below make sense for the 16 GiByte
RAM G5 example context? (Other G5s may have less RAM.)

-: old VM_MAX_KERNEL_ADDRESS figure in kernel build
+: modern VM_MAX_KERNEL_ADDRESS figure in kernel build
(The context is not using zfs, just ufs.)

-kern.maxvnodes: 188433
+kern.maxvnodes: 337606

-kern.ipc.maxpipekva: 119537663
+kern.ipc.maxpipekva: 267718656

-kern.ipc.maxmbufmem: 1530083328
+kern.ipc.maxmbufmem: 2741362688

-kern.ipc.nmbclusters: 186778
+kern.ipc.nmbclusters: 334640

-kern.ipc.nmbjumbop: 93388
+kern.ipc.nmbjumbop: 167319

-kern.ipc.nmbjumbo9: 27670
+kern.ipc.nmbjumbo9: 49576

-kern.ipc.nmbjumbo16: 15564
+kern.ipc.nmbjumbo16: 27886

-kern.ipc.nmbufs: 1195380
+kern.ipc.nmbufs: 2141700

-kern.minvnodes: 47108
+kern.minvnodes: 84401

-kern.nbuf: 47358
+kern.nbuf: 105243

-vm.max_kernel_address: 16140901072146268159
+vm.max_kernel_address: 16140901098855596031
(included for reference)
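
A minimal decoding sketch of the two figures, assuming the usual
powerpc64 VM_MIN_KERNEL_ADDRESS base of 0xe000000000000000: the KVA
span grows from about 7.125 GiByte to 32 GiByte, and 2/5 of each span
reproduces the vm.kmem_size_max figures below (consistent with the
usual 2/5-of-KVA style VM_KMEM_SIZE_MAX definition):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t vm_min = 0xe000000000000000ULL;    /* assumed base */
	uint64_t old_max = 16140901072146268159ULL; /* 0xe0000001c7ffffff */
	uint64_t new_max = 16140901098855596031ULL; /* 0xe0000007ffffffff */
	uint64_t old_span = old_max - vm_min + 1;   /* 7650410496: 7.125 GiByte */
	uint64_t new_span = new_max - vm_min + 1;   /* 34359738368: 32 GiByte */

	printf("old KVA: %.3f GiB\n", old_span / 1073741824.0);
	printf("new KVA: %.3f GiB\n", new_span / 1073741824.0);
	/* Prints 3060164198 and 13743895347, i.e. the vm.kmem_size_max
	 * figures shown below. */
	printf("2/5 old: %ju\n", (uintmax_t)(old_span * 2 / 5));
	printf("2/5 new: %ju\n", (uintmax_t)(new_span * 2 / 5));
	return (0);
}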

-vm.kmem_size: 3060166656
+vm.kmem_size: 5482725376

-vm.kmem_size_max: 3060164198
+vm.kmem_size_max: 13743895347

-vm.kmem_map_size: 44638208
+vm.kmem_map_size: 51691520

-vm.kmem_map_free: 3015528448
+vm.kmem_map_free: 5431033856

-vfs.ufs.dirhash_maxmem: 12115968
+vfs.ufs.dirhash_maxmem: 26935296

-vfs.wantfreevnodes: 47108
+vfs.wantfreevnodes: 84401

-vfs.maxbufspace: 775913472
+vfs.maxbufspace: 1724301312

-vfs.maxmallocbufspace: 38762905
+vfs.maxmallocbufspace: 86182297

-vfs.lobufspace: 736495195
+vfs.lobufspace: 1637463643

-vfs.hibufspace: 775258112
+vfs.hibufspace: 1723645952

-vfs.bufspacethresh: 755876653
+vfs.bufspacethresh: 1680554797

-vfs.lorunningspace: 8126464
+vfs.lorunningspace: 11206656

-vfs.hirunningspace: 12124160
+vfs.hirunningspace: 16777216

-vfs.lodirtybuffers: 5929
+vfs.lodirtybuffers: 13165

-vfs.hidirtybuffers: 11859
+vfs.hidirtybuffers: 26330

-vfs.dirtybufthresh: 10673
+vfs.dirtybufthresh: 23697

-vfs.numfreebuffers: 47358
+vfs.numfreebuffers: 105243

-vfs.nfsd.request_space_low: 63753556
+vfs.nfsd.request_space_low: 114223786

-vfs.nfsd.request_space_high: 95630336
+vfs.nfsd.request_space_high: 171335680

-net.inet.ip.maxfrags: 5836
+net.inet.ip.maxfrags: 10457

-net.inet.ip.maxfragpackets: 5893
+net.inet.ip.maxfragpackets: 10508

-net.inet.tcp.reass.maxsegments: 11703
+net.inet.tcp.reass.maxsegments: 20916

-net.inet.sctp.maxchunks: 23347
+net.inet.sctp.maxchunks: 41830

-net.inet6.ip6.maxfragpackets: 5836
+net.inet6.ip6.maxfragpackets: 10457

-net.inet6.ip6.maxfrags: 5836
+net.inet6.ip6.maxfrags: 10457

-net.inet6.ip6.maxfragbucketsize: 11
+net.inet6.ip6.maxfragbucketsize: 20

-debug.softdep.max_softdeps: 753732
+debug.softdep.max_softdeps: 1350424

-machdep.moea64_pte_valid: 148909
+machdep.moea64_pte_valid: 160636
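
As a sanity check on the scaling: several of these figures appear to
key off vm.kmem_size rather than RAM directly. For example, in both
columns kern.ipc.maxmbufmem above is exactly vm.kmem_size / 2; a
trivial check of that relation from the posted values:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* vm.kmem_size and kern.ipc.maxmbufmem pairs from above. */
	uint64_t kmem[2] = { 3060166656ULL, 5482725376ULL };
	uint64_t mbuf[2] = { 1530083328ULL, 2741362688ULL };

	for (int i = 0; i < 2; i++)
		printf("%s: maxmbufmem %s kmem_size / 2\n",
		    i == 0 ? "old" : "new",
		    mbuf[i] == kmem[i] / 2 ? "==" : "!=");
	return (0);
}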



===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)



