Date:      Sat, 27 Apr 2024 18:36:38 +0000
From:      bugzilla-noreply@freebsd.org
To:        bugs@FreeBSD.org
Subject:   [Bug 277992] mpr and possible trim issues
Message-ID:  <bug-277992-227-gbR7SchWnG@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-277992-227@https.bugs.freebsd.org/bugzilla/>
References:  <bug-277992-227@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277992

--- Comment #9 from mike@sentex.net ---
I decided to try the same tests on the exact same hardware, but booting TrueNAS SCALE to see if the problem persists. If I do a manual trim between zfs send | zfs recv, the performance seems fairly consistent and there are no crashes/resets of the drives in the pool on Linux (6.6.20-production+truenas).
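For context, one iteration of the copy test looks roughly like the following; the pool and dataset names here are placeholders, not the actual ones used:

    # one pass of the copy test, with the manual trim in between
    zfs snapshot src/data@test1
    zfs send src/data@test1 | zfs recv -F tank/data
    zpool trim -w tank          # manual trim of the receiving pool; -w waits for it to finish
    zfs destroy -r tank/data    # reset the target before the next loop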

I'm not a Linux person, so it's hard to say whether there are quirks for these disks on Linux.

root@truenas[/var/log]# hdparm -I /dev/sda | grep -i tri
           *    Data Set Management TRIM supported (limit 8 blocks)
           *    Deterministic read data after TRIM
root@truenas[/var/log]#

If I don't do the manual TRIM between send|recv (i.e., zpool trim -w pool), I get the same pattern as when I do a manual trim -f /dev/da[x] on each disk one by one on FreeBSD: I get 3 full-speed loops, and after that it is super slow until a proper trim is done. On FreeBSD I do this on the raidz1 pool by running trim -f /dev/da[1-4] one disk at a time and resilvering, roughly as sketched below.
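The per-disk step is something like the following (pool name is a placeholder, and the offline/online steps are assumed, since erasing a member disk forces a resilver anyway):

    # repeat for da1 through da4, one disk at a time
    zpool offline tank da1          # take the member out of the raidz1 vdev
    trim -f /dev/da1                # whole-device TRIM via trim(8); -f skips the confirmation
    zpool online tank da1           # reattach it; ZFS resilvers the disk
    zpool wait -t resilver tank     # let the resilver finish before the next disk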

So it does seem to point to TRIM issued via ZFS (be that manual or autotrim) being somehow broken with this drive on FreeBSD, either via the mpr driver or via the ATA driver.
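For clarity, the two ZFS-side trim paths referred to above (pool name again a placeholder):

    zpool trim -w tank           # manual, on-demand trim of the pool's vdevs
    zpool set autotrim=on tank   # autotrim: ZFS issues TRIM for freed blocks as it goes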

--
You are receiving this mail because:
You are the assignee for the bug.


