Date:      Wed, 06 Apr 2022 13:28:45 +0200
From:      egoitz@ramattack.net
To:        freebsd-fs@freebsd.org, freebsd-hackers@freebsd.org, freebsd-performance@freebsd.org
Subject:   Re: Desperate with 870 QVO and ZFS
Message-ID:  <ccf20a30c29ab3e42526d0d5f22124e4@ramattack.net>
In-Reply-To: <4e98275152e23141eae40dbe7ba5571f@ramattack.net>
References:  <4e98275152e23141eae40dbe7ba5571f@ramattack.net>


The strangest thing is: when a machine boots, the ARC sits at around 40GB used (for instance), but later it decreases to 20GB (and this is not an approximation, it is exact) on all my servers. It's as if the ARC metadata, which is more or less 17GB, were limiting the whole ARC.

Given the traffic these machines handle, I would expect the ARC to be larger than it is. The ARC is limited in loader.conf to 64GB (half the RAM these machines have).
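If it helps to reason about the numbers: on the OpenZFS versions I'm familiar with, the ARC metadata limit defaults to a quarter of arc_max (that fraction is an assumption on my part; confirm with `sysctl vfs.zfs.arc_meta_limit` on your build). With arc_max set to 64GB, a back-of-the-envelope check lands very close to the ~17GB of metadata I'm seeing:

```shell
# Hypothetical sketch; the 1/4 default is an assumption, verify it with:
#   sysctl vfs.zfs.arc_meta_limit
arc_max=$((64 * 1024 * 1024 * 1024))   # 64 GiB, as set in loader.conf
arc_meta_limit=$((arc_max / 4))        # assumed default: arc_max / 4
echo "$((arc_meta_limit / 1024 / 1024 / 1024)) GiB"
```

which prints "16 GiB", right in the neighborhood of the metadata usage observed, so the metadata cap itself would not explain the whole ARC shrinking to 20GB.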

On 2022-04-06 13:15, egoitz@ramattack.net wrote:

> ATTENTION: This email was sent from outside the organization. Do not click on links or open attachments unless you recognize the sender and know the content is safe.
> 
> Good morning,
> 
> I write this post in the hope that perhaps someone can help me.
> 
> I am running some mail servers on FreeBSD with ZFS. They use Samsung 870 QVO disks (not EVO or other Samsung SSD models) as storage. They can easily have from 1500 to 2000 concurrent connections. The machines have 128GB of RAM and the CPU is almost completely idle. Disk IO is normally at 30 or 40% busy at most.
> 
> The problem I'm facing is that they can be running just fine and then suddenly, at some peak hour, the IO goes to 60 or 70% busy and the machine becomes extremely slow. ZFS is all defaults, except for the sync property, which is set to disabled, and the ARC, which is limited to 64GB. But even that is extremely odd: the ARC actually in use is near 20GB. I have seen that the metadata cache in the ARC is very near the limit that FreeBSD sets automatically depending on the ARC size you configure. It seems that almost all of the ARC is used by the metadata cache. I have seen this effect on all my mail servers with this hardware and software configuration.
> 
> I attach a zfs-stats output, though note that right now the servers are not as loaded as described above. Let me explain. I run a couple of Cyrus instances on each of these servers, one as master and one as slave. The situation described above happens when both Cyrus instances become masters, that is, when two Cyrus instances are serving traffic on the same machine. To avoid issues, we have now rebalanced so that each server runs one master and one slave. As you know, a slave instance does almost no IO and has only a single connection, used for replication. So the zfs-stats output reflects, let's say, half the load on each server, because each one has one master and one slave instance.
> 
> As I said before, when I place two masters on the same server, everything may work fine all day, but then at 11:00 am (for example) the IO goes to 60% busy and does not increase further. It seems as if IO beyond that point simply cannot be served, as if there were a hard limit at around 60%.
> 
> I don't really know whether the QVO technology could be the culprit here, because these are said to be desktop-class disks. On the other hand, I get good performance when copying mailboxes in batches of five: I can flood a gigabit interface when copying mailboxes between servers five at a time, so they do seem to perform.
> 
> Could anyone please shed some light on this issue? I don't really know what to think.
> 
> Best regards, 
> 



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?ccf20a30c29ab3e42526d0d5f22124e4>