Date:      Thu, 30 Jan 2020 17:33:58 +0000
From:      bugzilla-noreply@freebsd.org
To:        geom@FreeBSD.org
Subject:   [Bug 242747] geli: AMD Epyc+GELI not using Hardware AES
Message-ID:  <bug-242747-14739-vh13zLjRFb@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-242747-14739@https.bugs.freebsd.org/bugzilla/>
References:  <bug-242747-14739@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=242747

--- Comment #10 from Nick Evans <nevans@talkpoint.com> ---
(In reply to dewayne from comment #8)

So far the results are the same with both boxes on -CURRENT with NODEBUG
kernels, so at least that's ruled out.


eli.batch=1 alone helps the CPU usage, but at the expense of throughput, at
least on the Epyc box: it goes from about 280MB/s per disk to 180MB/s.
Idleness went up to 60%, but that is probably just due to the drop in overall
throughput.

The eli.threads=2 setting makes a big difference on the Epyc box. Per-disk
throughput went up to 330MB/s and the overall idleness went up to 92% running
dd if=/dev/da#.eli of=/dev/null bs=1m, one per disk. batch=1 even with
threads=2 doesn't seem to help in this case.
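
Roughly what the test looks like (disk names are from this box, with
kern.geom.eli.threads="2" set in /boot/loader.conf before boot; adjust the
glob for your providers):

  # one sequential read per .eli provider, all running in parallel
  for d in /dev/da*.eli; do
      dd if=$d of=/dev/null bs=1m &
  done
  wait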

I guess there's some kind of thrashing going on here when the default 32
threads per disk are created that affects the Epyc box more than the Xeon.
I'll run some tests at different thread numbers and report back (see the
sketch below). Maybe we can at least come up with more sensible defaults.
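
Something like this is what I have in mind for the sweep. It's only a sketch:
it assumes the providers can be detached and re-attached non-interactively
(e.g. keyfiles configured in rc.conf) and that kern.geom.eli.threads is
honoured at attach time rather than only at boot, which I still need to
confirm:

  # untested sketch: sweep thread counts and compare per-disk read throughput
  for t in 1 2 4 8 16 32; do
      sysctl kern.geom.eli.threads=$t
      # detach and re-attach the providers here so the new thread count applies
      for d in /dev/da*.eli; do
          dd if=$d of=/dev/null bs=1m count=10240 &   # ~10GB per disk
      done
      wait
  done

The per-run numbers would come straight from the transfer-rate summary dd
prints when each copy finishes.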

-- 
You are receiving this mail because:
You are the assignee for the bug.


