Date:      Thu, 21 Mar 2024 08:57:44 -0400
From:      "Drew Gallatin" <gallatin@freebsd.org>
To:        "Konstantin Belousov" <kib@freebsd.org>, rrs <rrs@lakerest.net>
Cc:        "Mike Karels" <mike@karels.net>, tuexen <tuexen@freebsd.org>, "Nuno Teixeira" <eduardo@freebsd.org>, garyj@gmx.de, current@freebsd.org, net@freebsd.org, "Randall Stewart" <rrs@freebsd.org>
Subject:   Re: Request for Testing: TCP RACK
Message-ID:  <fc160c79-1884-4f68-8310-35e7ac0b9dd6@app.fastmail.com>
In-Reply-To: <Zft8odA0s49eLhvk@kib.kiev.ua>
References:  <6e795e9c-8de4-4e02-9a96-8fabfaa4e66f@app.fastmail.com> <CAFDf7UKDWSnhm%2BTwP=ZZ9dkk0jmAgjGKPLpkX-CKuw3yH233gQ@mail.gmail.com> <CAFDf7UJq9SCnU-QYmS3t6EknP369w2LR0dNkQAc-NaRLvwVfoQ@mail.gmail.com> <A3F1FC0C-C199-4565-8E07-B233ED9E7B2E@freebsd.org> <6047C8EF-B1B0-4286-93FA-AA38F8A18656@karels.net> <ZfiI7GcbTwSG8kkO@kib.kiev.ua> <8031cd99-ded8-4b06-93b3-11cc729a8b2c@app.fastmail.com> <ZfiY-xUUM3wrBEz_@kib.kiev.ua> <38c54399-6c96-44d8-a3a2-3cc1bfbe50c2@app.fastmail.com> <27d8144f-0658-46f6-b8f3-35eb60061644@lakerest.net> <Zft8odA0s49eLhvk@kib.kiev.ua>


The entire point is to *NOT* go through the overhead of scheduling something asynchronously, but to take advantage of the fact that a user/kernel transition is going to trash the cache anyway.

In the common case of a system which has fewer than the threshold number of connections, we access the tcp_hpts_softclock function pointer, make one function call, and access hpts_that_need_softclock, and then return.  So that's 2 variables and a function call.

I think it would be preferable to avoid that call, and to move the declarations of tcp_hpts_softclock and hpts_that_need_softclock so that they are in the same cacheline.  Then we'd be hitting just a single line in the common case.  (I've made comments on the review to that effect.)

Also, I wonder if the threshold could be raised by default, so that hpts is never called in this context unless we're to the point where we're scheduling thousands of runs of the hpts thread (and taking all those clock interrupts).

Drew

On Wed, Mar 20, 2024, at 8:17 PM, Konstantin Belousov wrote:
> On Tue, Mar 19, 2024 at 06:19:52AM -0400, rrs wrote:
> > Ok I have created
> > 
> > https://reviews.freebsd.org/D44420
> > 
> > to address the issue. I also attach a short version of the patch that Nuno
> > can try and validate it works. Drew, you may want to try this and validate
> > that the optimization does kick in, since I can only now test that it does
> > not on my local box :)
> The patch still causes access to all CPUs' cachelines on each userret.
> It would be much better to inc/check the threshold and only schedule the
> call when exceeded.  Then the call can occur in some dedicated context,
> like a per-CPU thread, instead of userret.
> 
> > 
> > 
> > R
> > 
> > 
> > 
> > On 3/18/24 3:42 PM, Drew Gallatin wrote:
> > > No.  The goal is to run on every return to userspace for every thread.
> > > 
> > > Drew
> > > 
> > > On Mon, Mar 18, 2024, at 3:41 PM, Konstantin Belousov wrote:
> > > > On Mon, Mar 18, 2024 at 03:13:11PM -0400, Drew Gallatin wrote:
> > > > > I got the idea from
> > > > > https://people.mpi-sws.org/~druschel/publications/soft-timers-tocs.pdf
> > > > > The gist is that the TCP pacing stuff needs to run frequently, and
> > > > > rather than run it out of a clock interrupt, it's more efficient to run
> > > > > it out of a system call context at just the point where we return to
> > > > > userspace and the cache is trashed anyway. The current implementation
> > > > > is fine for our workload, but probably not ideal for a generic system,
> > > > > especially one where something is banging on system calls.
> > > > >
> > > > > ASTs could be the right tool for this, but I'm super unfamiliar with
> > > > > them, and I can't find any docs on them.
> > > > >
> > > > > Would ast_register(0, ASTR_UNCOND, 0, func) be roughly equivalent to
> > > > > what's happening here?
> > > > This call would need some AST number added, and then it registers the
> > > > AST to run on the next return to userspace, for the current thread.
> > > > 
> > > > Is it enough?
> > > > >
> > > > > Drew
> > > > 
> > > > >
> > > > > On Mon, Mar 18, 2024, at 2:33 PM, Konstantin Belousov wrote:
> > > > > > On Mon, Mar 18, 2024 at 07:26:10AM -0500, Mike Karels wrote:
> > > > > > > On 18 Mar 2024, at 7:04, tuexen@freebsd.org wrote:
> > > > > > >
> > > > > > > >> On 18. Mar 2024, at 12:42, Nuno Teixeira <eduardo@freebsd.org> wrote:
> > > > > > > >>
> > > > > > > >> Hello all!
> > > > > > > >>
> > > > > > > >> It works just fine!
> > > > > > > >> System performance is OK.
> > > > > > > >> Using patch on main-n268841-b0aaf8beb126(-dirty).
> > > > > > > >>
> > > > > > > >> ---
> > > > > > > >> net.inet.tcp.functions_available:
> > > > > > > >> Stack                           D Alias                            PCB count
> > > > > > > >> freebsd                           freebsd                          0
> > > > > > > >> rack                            * rack                             38
> > > > > > > >> ---
> > > > > > > >>
> > > > > > > >> It would be so nice if we could have a sysctl tunable for
> > > > > > > >> this patch so we could do more tests without recompiling the kernel.
> > > > > > > > Thanks for testing!
> > > > > > > >
> > > > > > > > @gallatin: can you come up with a patch that is acceptable
> > > > > > > > for Netflix and allows the performance regression to be mitigated?
> > > > > > >
> > > > > > > Ideally, tcphpts could enable this automatically when it starts
> > > > > > > to be used (enough?), but a sysctl could select auto/on/off.
> > > > > > There is already a well-known mechanism to request execution of a
> > > > > > specific function on return to userspace, namely AST.  The difference
> > > > > > from the current hack is that the execution is requested for one
> > > > > > callback, in the context of the specific thread.
> > > > > >
> > > > > > Still, it might be worth a try to use it; what is the reason to
> > > > > > hit a thread that does not do networking with TCP processing?
> > > > > >
> > > > > > >
> > > > > > > Mike
> > > > > > >
> > > > > > > > Best regards
> > > > > > > > Michael
> > > > > > > >>
> > > > > > > >> Thanks all!
> > > > > > > >> Really happy here :)
> > > > > > > >>
> > > > > > > >> Cheers,
> > > > > > > >>
> > > > > > > >> Nuno Teixeira <eduardo@freebsd.org> wrote (Sunday, 17/03/2024 at 20:26):
> > > > > > > >>>
> > > > > > > >>> Hello,
> > > > > > > >>>
> > > > > > > >>>> I don't have the full context, but it seems like the
> > > > > > > >>>> complaint is a performance regression in bonnie++ and perhaps other
> > > > > > > >>>> things when tcp_hpts is loaded, even when it is not used.  Is that
> > > > > > > >>>> correct?
> > > > > > > >>>>
> > > > > > > >>>> If so, I suspect it's because we drive the
> > > > > > > >>>> tcp_hpts_softclock() routine from userret(), in order to avoid tons
> > > > > > > >>>> of timer interrupts and context switches.  To test this theory, you
> > > > > > > >>>> could apply a patch like:
> > > > > > > >>>
> > > > > > > >>> It's affecting overall system performance; bonnie was just
> > > > > > > >>> a way to get some numbers to compare.
> > > > > > > >>>
> > > > > > > >>> Tomorrow I will test patch.
> > > > > > > >>>
> > > > > > > >>> Thanks!
> > > > > > > >>>
> > > > > > > >>> --
> > > > > > > >>> Nuno Teixeira
> > > > > > > >>> FreeBSD Committer (ports)
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> --
> > > > > > > >> Nuno Teixeira
> > > > > > > >> FreeBSD Committer (ports)
> > > > > > >
> > > > > >
> > > > 
> > > 
> 
> > diff --git a/sys/netinet/tcp_hpts.c b/sys/netinet/tcp_hpts.c
> > index 8c4d2d41a3eb..eadbee19f69c 100644
> > --- a/sys/netinet/tcp_hpts.c
> > +++ b/sys/netinet/tcp_hpts.c
> > @@ -216,6 +216,7 @@ struct tcp_hpts_entry {
> >  	void *ie_cookie;
> >  	uint16_t p_num;		/* The hpts number one per cpu */
> >  	uint16_t p_cpu;		/* The hpts CPU */
> > +	uint8_t hit_callout_thresh;
> >  	/* There is extra space in here */
> >  	/* Cache line 0x100 */
> >  	struct callout co __aligned(CACHE_LINE_SIZE);
> > @@ -269,6 +270,11 @@ static struct hpts_domain_info {
> >  	int cpu[MAXCPU];
> >  } hpts_domains[MAXMEMDOM];
> >  
> > +counter_u64_t hpts_that_need_softclock;
> > +SYSCTL_COUNTER_U64(_net_inet_tcp_hpts_stats, OID_AUTO, needsoftclock, CTLFLAG_RD,
> > +    &hpts_that_need_softclock,
> > +    "Number of hpts threads that need softclock");
> > +
> >  counter_u64_t hpts_hopelessly_behind;
> >  
> >  SYSCTL_COUNTER_U64(_net_inet_tcp_hpts_stats, OID_AUTO, hopeless, CTLFLAG_RD,
> > @@ -334,7 +340,7 @@ SYSCTL_INT(_net_inet_tcp_hpts, OID_AUTO, precision, CTLFLAG_RW,
> >      &tcp_hpts_precision, 120,
> >      "Value for PRE() precision of callout");
> >  SYSCTL_INT(_net_inet_tcp_hpts, OID_AUTO, cnt_thresh, CTLFLAG_RW,
> > -    &conn_cnt_thresh, 0,
> > +    &conn_cnt_thresh, DEFAULT_CONNECTION_THESHOLD,
> >      "How many connections (below) make us use the callout based mechanism");
> >  SYSCTL_INT(_net_inet_tcp_hpts, OID_AUTO, logging, CTLFLAG_RW,
> >      &hpts_does_tp_logging, 0,
> > @@ -1548,6 +1554,9 @@ __tcp_run_hpts(void)
> >  	struct tcp_hpts_entry *hpts;
> >  	int ticks_ran;
> >  
> > +	if (counter_u64_fetch(hpts_that_need_softclock) == 0)
> > +		return;
> > +
> >  	hpts = tcp_choose_hpts_to_run();
> >  
> >  	if (hpts->p_hpts_active) {
> > @@ -1683,6 +1692,13 @@ tcp_hpts_thread(void *ctx)
> >  	ticks_ran = tcp_hptsi(hpts, 1);
> >  	tv.tv_sec = 0;
> >  	tv.tv_usec = hpts->p_hpts_sleep_time * HPTS_TICKS_PER_SLOT;
> > +	if ((hpts->p_on_queue_cnt > conn_cnt_thresh) && (hpts->hit_callout_thresh == 0)) {
> > +		hpts->hit_callout_thresh = 1;
> > +		counter_u64_add(hpts_that_need_softclock, 1);
> > +	} else if ((hpts->p_on_queue_cnt <= conn_cnt_thresh) && (hpts->hit_callout_thresh == 1)) {
> > +		hpts->hit_callout_thresh = 0;
> > +		counter_u64_add(hpts_that_need_softclock, -1);
> > +	}
> >  	if (hpts->p_on_queue_cnt >= conn_cnt_thresh) {
> >  		if (hpts->p_direct_wake == 0) {
> >  			/*
> > @@ -1818,6 +1834,7 @@ tcp_hpts_mod_load(void)
> >  	cpu_top = NULL;
> >  #endif
> >  	tcp_pace.rp_num_hptss = ncpus;
> > +	hpts_that_need_softclock = counter_u64_alloc(M_WAITOK);
> >  	hpts_hopelessly_behind = counter_u64_alloc(M_WAITOK);
> >  	hpts_loops = counter_u64_alloc(M_WAITOK);
> >  	back_tosleep = counter_u64_alloc(M_WAITOK);
> > @@ -2042,6 +2059,7 @@ tcp_hpts_mod_unload(void)
> >  	free(tcp_pace.grps, M_TCPHPTS);
> >  #endif
> >  
> > +	counter_u64_free(hpts_that_need_softclock);
> >  	counter_u64_free(hpts_hopelessly_behind);
> >  	counter_u64_free(hpts_loops);
> >  	counter_u64_free(back_tosleep);
> 
> 
