Date:      Sat, 09 Mar 2024 00:03:44 +0000
From:      bugzilla-noreply@freebsd.org
To:        fs@FreeBSD.org
Subject:   [Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5
Message-ID:  <bug-277389-3630-tXwNW7qgY1@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-277389-3630@https.bugs.freebsd.org/bugzilla/>
References:  <bug-277389-3630@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

Mark Millard <marklmi26-fbsd@yahoo.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |marklmi26-fbsd@yahoo.com

--- Comment #8 from Mark Millard <marklmi26-fbsd@yahoo.com> ---
I tried the basic test in the type of context that I happen to
have access to, for example: main [so: 15]. It is a rather
simple zfs context, really used for bectl rather than for other
typical zfs purposes. It did not show the problem. Still, for
comparison and contrast, I report some context details, first
the iozone output:

# iozone -i 0,1 -l 512 -r 4k -s 1g
        Iozone: Performance Test of File I/O
                Version $Revision: 3.506 $
                Compiled for 64 bit mode.
                Build: freebsd

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                     Vangel Bojaxhi, Ben England, Vikentsi Lapa,
                     Alexey Skidanov, Sudhir Kumar.

        Run began: Fri Mar  8 23:04:51 2024

        Record Size 4 kB
        File size set to 1048576 kB
        Command line used: iozone -i 0,1 -l 512 -r 4k -s 1g
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 512
        Max process = 512
        Throughput test with 512 processes
        Each process writes a 1048576 kByte file in 4 kByte records

        Children see throughput for 512 initial writers  = 2155051.28 kB/sec
        Parent sees throughput for 512 initial writers   = 1450918.13 kB/sec
        Min throughput per process                       =    4138.72 kB/sec
        Max throughput per process                       =    6173.17 kB/sec
        Avg throughput per process                       =    4209.08 kB/sec
        Min xfer                                         =  702788.00 kB

        Children see throughput for 512 rewriters        = 1160623.87 kB/sec
        Parent sees throughput for 512 rewriters         = 1152920.83 kB/sec
        Min throughput per process                       =    2260.53 kB/sec
        Max throughput per process                       =    2282.09 kB/sec
        Avg throughput per process                       =    2266.84 kB/sec
        Min xfer                                         = 1039540.00 kB



iozone test complete.

# zpool status
  pool: zoptb
 state: ONLINE
  scan: scrub repaired 0B in 00:01:45 with 0 errors on Sun Jun 19 06:50:48 2022
config:

        NAME           STATE     READ WRITE CKSUM
        zoptb          ONLINE       0     0     0
          gpt/OptBzfs  ONLINE       0     0     0

errors: No known data errors

I'll note that I use:

vfs.zfs.per_txg_dirty_frees_percent=5

in /etc/sysctl.conf on the ZFS FreeBSD systems that I have
access to. A different system had an issue that I reported,
and the person who had increased the default for this sysctl
recommended setting it back to this now-old default. That
worked, and I set the same value on all such systems. I have
no evidence of it being relevant here, but report the
contextual oddity anyway.
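
As a sketch, the persistent form of that setting is just a line in
/etc/sysctl.conf (the value 5 is the now-old default discussed above):

```
# /etc/sysctl.conf fragment -- applied at boot
vfs.zfs.per_txg_dirty_frees_percent=5
```

At runtime the same knob can be inspected with
`sysctl vfs.zfs.per_txg_dirty_frees_percent` or changed (until reboot)
with `sysctl vfs.zfs.per_txg_dirty_frees_percent=5`.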

I used:

# zfs list -ospace,compression,mountpoint
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  COMPRESS  MOUNTPOINT
. . .
zoptb/poudriere/data/wrkdirs   652G   360K        0B    360K             0B         0B  off       /usr/local/poudriere/data/wrkdirs
. . .

for the compression-off storage for the iozone activity.
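
For anyone trying to match the setup, putting the iozone files on a
dataset with compression disabled looks roughly like the following
(a sketch only: the dataset name is from my context above, and whether
`zfs set` or `zfs create -o compression=off` applies depends on whether
the dataset already exists):

```
# Sketch: ensure the iozone target dataset stores data uncompressed
zfs set compression=off zoptb/poudriere/data/wrkdirs
# Verify the property took effect:
zfs get compression zoptb/poudriere/data/wrkdirs
```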

The system has 192 GiBytes of RAM, 32 hardware threads (16 cores).

# gpart show -p
. . .

=>        40  2930277088    nda2  GPT  (1.4T)
          40      532480  nda2p1  efi  (260M)
      532520        2008          - free -  (1.0M)
      534528  1073741824  nda2p2  freebsd-swap  (512G)
  1074276352  1845493760  nda2p3  freebsd-zfs  (880G)
  2919770112    10507016          - free -  (5.0G)

. . .

# swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/OptBswp364 536870912        0 536870912     0%

NOTE: There is no evidence that the swap space was ever
used to store anything during the test.

# uname -apKU
FreeBSD 7950X3D-ZFS 15.0-CURRENT FreeBSD 15.0-CURRENT #137 main-n268520-5e248c23d995-dirty: Sat Feb 24 15:46:10 PST 2024     root@7950X3D-ZFS:/usr/obj/BUILDs/main-amd64-nodbg-clang/usr/main-src/amd64.amd64/sys/GENERIC-NODBG amd64 amd64 1500014 1500014

The build is a personal build, not an official FreeBSD build.
(I'd be surprised if the distinctions would somehow make a
difference for the type of test.)


Maybe having the mirror involved is important? Or some other
difference from my context? The amount of RAM? . . .?

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?bug-277389-3630-tXwNW7qgY1>