Date: Fri, 11 Oct 2013 14:52:10 -0700
From: John-Mark Gurney <jmg@funkthat.com>
To: Maksim Yevmenkin <emax@freebsd.org>
Cc: "current@freebsd.org" <current@freebsd.org>
Subject: Re: [rfc] small bioq patch
Message-ID: <20131011215210.GY56872@funkthat.com>
In-Reply-To: <CAFPOs6pXhDjj1JTY0JNaw8g=zvtw9NgDVeJTQW-=31jwj321mQ@mail.gmail.com>
References: <CAFPOs6pXhDjj1JTY0JNaw8g=zvtw9NgDVeJTQW-=31jwj321mQ@mail.gmail.com>
Maksim Yevmenkin wrote this message on Fri, Oct 11, 2013 at 11:17 -0700:
> i would like to submit the attached bioq patch for review and
> comments. this is a proof of concept. it helps with smoothing disk read
> service times and appears to eliminate outliers. please see the attached
> pictures (about a week's worth of data):
>
> - c034 "control" unmodified system
> - c044 patched system

Can you describe how you got this data?  Were you using the gstat code or
some other code?  Also, was your control system running w/ the patch, but
w/ the sysctl set to zero, to rule out any code alignment effects?

> graphs show max/avg disk read service times for both systems across 36
> spinning drives. both systems are relatively busy serving production
> traffic (about 10 Gbps at peak). grey shaded areas on the graphs
> represent the time when the systems are refreshing their content, i.e.
> the disks are both reading and writing at the same time.

Can you describe why you think this change makes an improvement?  Unless
you're running 10k or 15k RPM drives, 128 seems like a large number, as
that's about half the number of IOPS that a normal HD handles in a
second.  I assume you must be regularly seeing queue depths of 128+ for
this code to make a difference; do you see that w/ gstat?

Also, do you see similar throughput from the two systems?

-- 
  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
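A quick way to watch the queue depths and per-read service times in
question is gstat(8) itself; a minimal invocation might look like the
following (the -f filter regex is only an example and assumes da(4)
device names, so adjust it for the drives actually in use):

    # refresh once per second, show only providers doing work, and limit
    # the view to the whole-disk da(4) providers; the L(q) column is the
    # current queue length and ms/r the average read service time
    gstat -a -I 1s -f '^da[0-9]+$'

If L(q) regularly sits well above 128 during the refresh windows, that
would support the assumption above; if it rarely gets there, the patch's
limit is unlikely to be what changes the service-time curve.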