Date: Wed, 06 Apr 2022 17:48:31 +0200
From: egoitz@ramattack.net
To: freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org, Freebsd performance <freebsd-performance@freebsd.org>
Cc: owner-freebsd-fs@freebsd.org
Subject: Re: Re: Desperate with 870 QVO and ZFS
Message-ID: <29f0eee5b502758126bf4cfa2d8e3517@ramattack.net>
In-Reply-To: <28e11d7ec0ac5dbea45f9f271fc28f06@ramattack.net>
References: <4e98275152e23141eae40dbe7ba5571f@ramattack.net> <665236B1-8F61-4B0E-BD9B-7B501B8BD617@ultra-secure.de> <0ef282aee34b441f1991334e2edbcaec@ramattack.net> <28e11d7ec0ac5dbea45f9f271fc28f06@ramattack.net>
I have been thinking and... I have the following tunables now:

vfs.zfs.arc_meta_strategy: 0
vfs.zfs.arc_meta_limit: 17179869184
kstat.zfs.misc.arcstats.arc_meta_min: 4294967296
kstat.zfs.misc.arcstats.arc_meta_max: 19386809344
kstat.zfs.misc.arcstats.arc_meta_limit: 17179869184
kstat.zfs.misc.arcstats.arc_meta_used: 16870668480
vfs.zfs.arc_max: 68719476736

and top says:

ARC: 19G Total, 1505M MFU, 12G MRU, 6519K Anon, 175M Header, 5687M Other

Even when vfs.zfs.arc_max was set to 128GB (instead of the 64GB I have set now), the ARC never came close to its maximum usable size. Could that have something to do with the fact that the arc_meta values are almost at the configured limit? Perhaps increasing vfs.zfs.arc_meta_limit (I suppose that is the one to increase, rather than the kstat.zfs.misc.arcstats.arc_meta_limit counter) would give better performance and take better advantage of the 64GB of ARC I have set? I say this because right now it doesn't use more than 19GB of ARC in total. A sketch of how I would check and change this is at the end of this message.

As always, any opinion or idea would be very highly appreciated.

Cheers,

On 2022-04-06 17:30, egoitz@ramattack.net wrote:

> One perhaps important note!!
>
> When this happens... almost all processes appear in top in one of the following states:
>
> txg state or
> txg->
> bio...
>
> Perhaps vfs.zfs.dirty_data_max, vfs.zfs.txg.timeout, or vfs.zfs.vdev.async_write_active_max_dirty_percent should be increased or decreased... I'm afraid of making some change and finally ending up with an unstable server... I'm not an expert in handling these values... (a read-only sketch for inspecting them is also at the end of this message.)
>
> Any recommendation?
>
> Best regards,
>
> On 2022-04-06 16:36, egoitz@ramattack.net wrote:
>
>> Hi Rainer!
>>
>> Thank you so much for your help :) :)
>>
>> Well, I assume they are in a datacenter, so there should not be a power outage...
>>
>> About dataset size... yes, ours are big... each dataset can easily be 3-4 TB...
>>
>> We bought them because they are for mailboxes, and mailboxes grow and grow... so we needed the space to host them.
>>
>> We knew they had some speed issues, but we thought (as Samsung explains on the QVO site) those speed issues only start after exceeding the write buffer these disks have. We thought that as long as you didn't exceed its capacity (the capacity of that buffer), no speed problem would arise. Perhaps we were wrong?
>>
>> Best regards,
>>
>> On 2022-04-06 14:56, Rainer Duffner wrote:
>>
>>> On 06.04.2022 at 13:15, egoitz@ramattack.net wrote:
>>>
>>>> I don't really know if perhaps the QVO technology could be the culprit here... because... they say these are desktop-computer disks... but later.
>>>
>>> Yeah, they are.
>>>
>>> Most likely, they don't have some sort of super-cap.
>>>
>>> A power failure might totally toast the filesystem.
>>>
>>> These disks are - IMO - designed to accelerate read operations. Their sustained write performance is usually mediocre, at best.
>>>
>>> They might work well for small data-sets - because that is really written to some cache and the firmware just claims it's „written", but once the data-set becomes big enough, they are about as fast as a fast SATA-disk.
>>>
>>> https://www.tomshardware.com/reviews/samsung-970-evo-plus-ssd,5608.html
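
P.S. In case it is useful, here is a minimal sketch of how I would check and (tentatively) raise the metadata limit at runtime. The sysctl names differ between ZFS versions (on newer OpenZFS the knob is spelled vfs.zfs.arc.meta_limit), and the 24GB figure below is only an illustration, not a recommendation:

    # Compare metadata actually cached against the configured limit.
    # In my case: 16870668480 of 17179869184 bytes used (~98%), i.e. the
    # metadata portion of the ARC is effectively pinned at its cap.
    sysctl kstat.zfs.misc.arcstats.arc_meta_used kstat.zfs.misc.arcstats.arc_meta_limit

    # Raise the limit at runtime (24GB = 25769803776, purely illustrative):
    sysctl vfs.zfs.arc_meta_limit=25769803776

    # If the sysctl is read-only on your ZFS version, set it as a loader
    # tunable in /boot/loader.conf instead and reboot:
    # vfs.zfs.arc_meta_limit="25769803776"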
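
P.P.S. Regarding the txg/bio states: before touching anything, I guess the safe first step is just to read the current write-throttle values and confirm the stuck processes really are blocked inside ZFS. Nothing below changes any setting, and the pid is a hypothetical placeholder:

    # Current write-throttle / transaction-group tunables:
    sysctl vfs.zfs.dirty_data_max vfs.zfs.txg.timeout vfs.zfs.vdev.async_write_active_max_dirty_percent

    # Watch per-thread states (including kernel threads) during a stall:
    top -SH

    # Dump the kernel stack of one stuck process to see where it is
    # waiting (12345 is a placeholder pid):
    procstat -kk 12345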