Date: Thu, 21 Mar 2024 08:57:44 -0400
From: "Drew Gallatin" <gallatin@freebsd.org>
To: "Konstantin Belousov" <kib@freebsd.org>, rrs <rrs@lakerest.net>
Cc: "Mike Karels" <mike@karels.net>, tuexen <tuexen@freebsd.org>, "Nuno Teixeira" <eduardo@freebsd.org>, garyj@gmx.de, current@freebsd.org, net@freebsd.org, "Randall Stewart" <rrs@freebsd.org>
Subject: Re: Request for Testing: TCP RACK
Message-ID: <fc160c79-1884-4f68-8310-35e7ac0b9dd6@app.fastmail.com>
In-Reply-To: <Zft8odA0s49eLhvk@kib.kiev.ua>
References: <6e795e9c-8de4-4e02-9a96-8fabfaa4e66f@app.fastmail.com> <CAFDf7UKDWSnhm+TwP=ZZ9dkk0jmAgjGKPLpkX-CKuw3yH233gQ@mail.gmail.com> <CAFDf7UJq9SCnU-QYmS3t6EknP369w2LR0dNkQAc-NaRLvwVfoQ@mail.gmail.com> <A3F1FC0C-C199-4565-8E07-B233ED9E7B2E@freebsd.org> <6047C8EF-B1B0-4286-93FA-AA38F8A18656@karels.net> <ZfiI7GcbTwSG8kkO@kib.kiev.ua> <8031cd99-ded8-4b06-93b3-11cc729a8b2c@app.fastmail.com> <ZfiY-xUUM3wrBEz_@kib.kiev.ua> <38c54399-6c96-44d8-a3a2-3cc1bfbe50c2@app.fastmail.com> <27d8144f-0658-46f6-b8f3-35eb60061644@lakerest.net> <Zft8odA0s49eLhvk@kib.kiev.ua>
The entire point is to *NOT* go through the overhead of scheduling something asynchronously, but to take advantage of the fact that a user/kernel transition is going to trash the cache anyway.

In the common case of a system which has fewer than the threshold number of connections, we access the tcp_hpts_softclock function pointer, make one function call, and access hpts_that_need_softclock, and then return.  So that's 2 variables and a function call.

I think it would be preferable to avoid that call, and to move the declarations of tcp_hpts_softclock and hpts_that_need_softclock so that they are in the same cacheline.  Then we'd be hitting just a single line in the common case.  (I've made comments on the review to that effect.)

Also, I wonder if the threshold could be higher by default, so that hpts is never called in this context unless we're to the point where we're scheduling thousands of runs of the hpts thread (and taking all those clock interrupts).

Drew

On Wed, Mar 20, 2024, at 8:17 PM, Konstantin Belousov wrote:
> On Tue, Mar 19, 2024 at 06:19:52AM -0400, rrs wrote:
> > Ok, I have created
> >
> > https://reviews.freebsd.org/D44420
> >
> > to address the issue. I also attach a short version of the patch that Nuno
> > can try and validate that it works. Drew, you may want to try this and
> > validate that the optimization does kick in, since I can only now test
> > that it does not on my local box :)
> The patch still causes access to all cpus' cachelines on each userret.
> It would be much better to inc/check the threshold and only schedule the
> call when exceeded.  Then the call can occur in some dedicated context,
> like a per-CPU thread, instead of userret.
>
> > R
> >
> > On 3/18/24 3:42 PM, Drew Gallatin wrote:
> > > No.  The goal is to run on every return to userspace for every thread.
> > >
> > > Drew
> > >
> > > On Mon, Mar 18, 2024, at 3:41 PM, Konstantin Belousov wrote:
> > > > On Mon, Mar 18, 2024 at 03:13:11PM -0400, Drew Gallatin wrote:
> > > > > I got the idea from
> > > > > https://people.mpi-sws.org/~druschel/publications/soft-timers-tocs.pdf
> > > > > The gist is that the TCP pacing stuff needs to run frequently, and
> > > > > rather than run it out of a clock interrupt, it's more efficient to run
> > > > > it out of a system call context at just the point where we return to
> > > > > userspace and the cache is trashed anyway.  The current implementation
> > > > > is fine for our workload, but probably not ideal for a generic system,
> > > > > especially one where something is banging on system calls.
> > > > >
> > > > > ASTs could be the right tool for this, but I'm super unfamiliar with
> > > > > them, and I can't find any docs on them.
> > > > >
> > > > > Would ast_register(0, ASTR_UNCOND, 0, func) be roughly equivalent to
> > > > > what's happening here?
> > > > This call would need some AST number added, and then it registers the
> > > > ast to run on next return to userspace, for the current thread.
> > > >
> > > > Is it enough?
> > > > >
> > > > > Drew
> > > > >
> > > > > On Mon, Mar 18, 2024, at 2:33 PM, Konstantin Belousov wrote:
> > > > > > On Mon, Mar 18, 2024 at 07:26:10AM -0500, Mike Karels wrote:
> > > > > > > On 18 Mar 2024, at 7:04, tuexen@freebsd.org wrote:
> > > > > > >
> > > > > > > >> On 18. Mar 2024, at 12:42, Nuno Teixeira <eduardo@freebsd.org> wrote:
> > > > > > > >>
> > > > > > > >> Hello all!
> > > > > > > >>
> > > > > > > >> It works just fine!
> > > > > > > >> System performance is OK.
> > > > > > > >> Using patch on main-n268841-b0aaf8beb126(-dirty).
> > > > > > > >>
> > > > > > > >> ---
> > > > > > > >> net.inet.tcp.functions_available:
> > > > > > > >> Stack       D Alias       PCB count
> > > > > > > >> freebsd       freebsd     0
> > > > > > > >> rack        * rack        38
> > > > > > > >> ---
> > > > > > > >>
> > > > > > > >> It would be so nice if we could have a sysctl tunable for this patch
> > > > > > > >> so we could do more tests without recompiling the kernel.
> > > > > > > > Thanks for testing!
> > > > > > > >
> > > > > > > > @gallatin: can you come up with a patch that is acceptable for Netflix
> > > > > > > > and allows us to mitigate the performance regression?
> > > > > > >
> > > > > > > Ideally, tcphpts could enable this automatically when it starts to be
> > > > > > > used (enough?), but a sysctl could select auto/on/off.
> > > > > > There is already a well-known mechanism to request execution of a
> > > > > > specific function on return to userspace, namely AST.  The difference
> > > > > > with the current hack is that the execution is requested for one callback
> > > > > > in the context of the specific thread.
> > > > > >
> > > > > > Still, it might be worth a try to use it; what is the reason to hit a thread
> > > > > > that does not do networking, with TCP processing?
> > > > > >
> > > > > > >
> > > > > > > Mike
> > > > > > >
> > > > > > > > Best regards
> > > > > > > > Michael
> > > > > > > >>
> > > > > > > >> Thanks all!
> > > > > > > >> Really happy here :)
> > > > > > > >>
> > > > > > > >> Cheers,
> > > > > > > >>
> > > > > > > >> Nuno Teixeira <eduardo@freebsd.org> escreveu (domingo, 17/03/2024 à(s) 20:26):
> > > > > > > >>>
> > > > > > > >>> Hello,
> > > > > > > >>>
> > > > > > > >>>> I don't have the full context, but it seems like the complaint is a
> > > > > > > >>>> performance regression in bonnie++ and perhaps other things when
> > > > > > > >>>> tcp_hpts is loaded, even when it is not used.  Is that correct?
> > > > > > > >>>>
> > > > > > > >>>> If so, I suspect it's because we drive the tcp_hpts_softclock()
> > > > > > > >>>> routine from userret(), in order to avoid tons of timer interrupts
> > > > > > > >>>> and context switches.  To test this theory, you could apply a patch
> > > > > > > >>>> like:
> > > > > > > >>>
> > > > > > > >>> It's affecting overall system performance, bonnie was just a way to
> > > > > > > >>> get some numbers to compare.
> > > > > > > >>>
> > > > > > > >>> Tomorrow I will test the patch.
> > > > > > > >>>
> > > > > > > >>> Thanks!
> > > > > > > >>>
> > > > > > > >>> --
> > > > > > > >>> Nuno Teixeira
> > > > > > > >>> FreeBSD Committer (ports)
> > > > > > > >>
> > > > > > > >> --
> > > > > > > >> Nuno Teixeira
> > > > > > > >> FreeBSD Committer (ports)
>
> > diff --git a/sys/netinet/tcp_hpts.c b/sys/netinet/tcp_hpts.c
> > index 8c4d2d41a3eb..eadbee19f69c 100644
> > --- a/sys/netinet/tcp_hpts.c
> > +++ b/sys/netinet/tcp_hpts.c
> > @@ -216,6 +216,7 @@ struct tcp_hpts_entry {
> >  	void *ie_cookie;
> >  	uint16_t p_num;		/* The hpts number one per cpu */
> >  	uint16_t p_cpu;		/* The hpts CPU */
> > +	uint8_t hit_callout_thresh;
> >  	/* There is extra space in here */
> >  	/* Cache line 0x100 */
> >  	struct callout co __aligned(CACHE_LINE_SIZE);
> > @@ -269,6 +270,11 @@ static struct hpts_domain_info {
> >  	int cpu[MAXCPU];
> >  } hpts_domains[MAXMEMDOM];
> >  
> > +counter_u64_t hpts_that_need_softclock;
> > +SYSCTL_COUNTER_U64(_net_inet_tcp_hpts_stats, OID_AUTO, needsoftclock, CTLFLAG_RD,
> > +    &hpts_that_need_softclock,
> > +    "Number of hpts threads that need softclock");
> > +
> >  counter_u64_t hpts_hopelessly_behind;
> >  
> >  SYSCTL_COUNTER_U64(_net_inet_tcp_hpts_stats, OID_AUTO, hopeless, CTLFLAG_RD,
> > @@ -334,7 +340,7 @@ SYSCTL_INT(_net_inet_tcp_hpts, OID_AUTO, precision, CTLFLAG_RW,
> >      &tcp_hpts_precision, 120,
> >      "Value for PRE() precision of callout");
> >  SYSCTL_INT(_net_inet_tcp_hpts, OID_AUTO, cnt_thresh, CTLFLAG_RW,
> > -    &conn_cnt_thresh, 0,
> > +    &conn_cnt_thresh, DEFAULT_CONNECTION_THESHOLD,
> >      "How many connections (below) make us use the callout based mechanism");
> >  SYSCTL_INT(_net_inet_tcp_hpts, OID_AUTO, logging, CTLFLAG_RW,
> >      &hpts_does_tp_logging, 0,
> > @@ -1548,6 +1554,9 @@ __tcp_run_hpts(void)
> >  	struct tcp_hpts_entry *hpts;
> >  	int ticks_ran;
> >  
> > +	if (counter_u64_fetch(hpts_that_need_softclock) == 0)
> > +		return;
> > +
> >  	hpts = tcp_choose_hpts_to_run();
> >  
> >  	if (hpts->p_hpts_active) {
> > @@ -1683,6 +1692,13 @@ tcp_hpts_thread(void *ctx)
> >  	ticks_ran = tcp_hptsi(hpts, 1);
> >  	tv.tv_sec = 0;
> >  	tv.tv_usec = hpts->p_hpts_sleep_time * HPTS_TICKS_PER_SLOT;
> > +	if ((hpts->p_on_queue_cnt > conn_cnt_thresh) && (hpts->hit_callout_thresh == 0)) {
> > +		hpts->hit_callout_thresh = 1;
> > +		counter_u64_add(hpts_that_need_softclock, 1);
> > +	} else if ((hpts->p_on_queue_cnt <= conn_cnt_thresh) && (hpts->hit_callout_thresh == 1)) {
> > +		hpts->hit_callout_thresh = 0;
> > +		counter_u64_add(hpts_that_need_softclock, -1);
> > +	}
> >  	if (hpts->p_on_queue_cnt >= conn_cnt_thresh) {
> >  		if (hpts->p_direct_wake == 0) {
> >  			/*
> > @@ -1818,6 +1834,7 @@ tcp_hpts_mod_load(void)
> >  	cpu_top = NULL;
> >  #endif
> >  	tcp_pace.rp_num_hptss = ncpus;
> > +	hpts_that_need_softclock = counter_u64_alloc(M_WAITOK);
> >  	hpts_hopelessly_behind = counter_u64_alloc(M_WAITOK);
> >  	hpts_loops = counter_u64_alloc(M_WAITOK);
> >  	back_tosleep = counter_u64_alloc(M_WAITOK);
> > @@ -2042,6 +2059,7 @@ tcp_hpts_mod_unload(void)
> >  	free(tcp_pace.grps, M_TCPHPTS);
> >  #endif
> >  
> > +	counter_u64_free(hpts_that_need_softclock);
> >  	counter_u64_free(hpts_hopelessly_behind);
> >  	counter_u64_free(hpts_loops);
> >  	counter_u64_free(back_tosleep);