From: Jan Bramkamp <crest@rlwinm.de>
To: performance@freebsd.org
Date: Wed, 6 Apr 2022 23:06:15 +0200
Subject: Re: Desperate with 870 QVO and ZFS
List-Archive: https://lists.freebsd.org/archives/freebsd-performance

On 06.04.22 18:34, egoitz@ramattack.net wrote:
>
> Hi Stefan!
>
> Thank you so much for your answer! I answer below in green bold, for a
> better distinction.
>
> Very thankful for all your comments, Stefan! :) :) :)
>
> Cheers!
>
> On 2022-04-06 17:43, Stefan Esser wrote:
>
>> On 06.04.22 at 16:36, egoitz@ramattack.net wrote:
>>> Hi Rainer!
>>>
>>> Thank you so much for your help :) :)
>>>
>>> Well, I assume they are in a datacenter, so there should not be a
>>> power outage...
>>>
>>> About dataset size... yes, ours are big... each dataset can easily
>>> be 3-4 TB.
>>>
>>> We bought them because they are for mailboxes, and mailboxes grow
>>> and grow... so we needed the space to host them.
>>
>> Which mailbox format (e.g. mbox, maildir, ...) do you use?
>>
>> *I'm running Cyrus IMAP, so sort of maildir... too many little files
>> normally... sometimes directories with tons of little files.*
>>
>>> We knew they had some speed issues, but we thought those speed issues
>>> (as Samsung explains on the QVO site) only started after exceeding
>>> the write buffer these disks have. We thought that as long as you
>>> didn't exceed its capacity (the capacity of the write buffer) no
>>> speed problem would arise. Perhaps we were wrong?
>>
>> These drives are meant for small loads in a typical PC use case,
>> i.e. some installations of software in the few-GB range, otherwise
>> only files of a few MB being written, perhaps an import of media
>> files that range from tens to a few hundred MB at a time, but less
>> often than once a day.
>>
>> *We move, you know... lots of little files... and lots of different
>> concurrent modifications by the 1500-2000 concurrent IMAP connections
>> we have...*
>>
>> As the SSD fills, the space available for the single-level write
>> cache gets smaller.
>>
>> *The single-level write cache is the cache these SSDs have to
>> compensate for the speed issues they have due to using QLC memory?
>> Is that what you are referring to? Sorry, I don't understand this
>> paragraph well.*

A single flash cell can be thought of as a software-adjustable resistor
forming a voltage divider with a fixed resistor. Storing just a single bit
per flash cell allows very fast writes and a long lifetime for each flash
cell, at the cost of low data density. You cheaped out and bought the
crappiest type of consumer SSD. These SSDs are optimized for one thing:
price per capacity (at reasonable read performance). They accomplish this
by exploiting the expected user behavior of modifying only small subsets
of the stored data in short bursts, and of buying a lot more capacity than
is actually used. You deployed them in a mail server facing continuous
writes for hours on end most days of the week.

As the average load increases and the cheap SSDs fill up, less and less
unallocated flash is available to use as cache, and the fast SLC cache
fills up. The SSD firmware then has to stop accepting new requests from
the SATA port, and because only ~30 operations can be queued per SATA
disk, and because of the ordering requirements between those operations,
not even reads can be satisfied while the cache is slowly written out at
four bits per flash cell instead of one. To the user this appears as the
system almost hanging, because every uncached read and every sync write
takes tens to hundreds of milliseconds instead of less than 3 ms. No
amount of file system or driver tuning can truly fix this design
compromise without severely limiting the write throughput in software to
stay below the sustained drain rate of the SLC cache.
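
To put rough numbers on that last point, here is a minimal
back-of-the-envelope sketch in Python. The cache size and throughput
figures are placeholder assumptions I picked for illustration (roughly the
ballpark vendors quote for QLC drives of this class, not measured 870 QVO
values); plug in whatever your drives actually sustain.

#!/usr/bin/env python3
# Toy model of a QLC SSD's SLC write cache: the cache fills at the
# difference between the incoming write rate and the rate at which the
# firmware can drain it to QLC in the background.

slc_cache_gb   = 42.0    # assumed usable SLC cache on a mostly full drive
sustained_mb_s = 80.0    # assumed direct-to-QLC (post-cache) write rate
ingest_mb_s    = 200.0   # assumed average write load from the mail server

fill_rate_mb_s = ingest_mb_s - sustained_mb_s
if fill_rate_mb_s > 0:
    seconds_to_full = slc_cache_gb * 1024 / fill_rate_mb_s
    print(f"SLC cache exhausted after ~{seconds_to_full / 60:.0f} minutes;")
    print(f"after that writes drop to ~{sustained_mb_s:.0f} MB/s and "
          f"queued reads stall behind them")
else:
    print("ingest stays below the drain rate; the cache never fills up")

With these assumed figures the cache lasts about six minutes of sustained
load; the only lever that changes the outcome is keeping the ingest rate
below the drain rate.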
If you want to invest the time, pain, and suffering needed to squeeze the
most out of this hardware, look into the ~2015 CAM I/O scheduler work that
Netflix upstreamed to FreeBSD. Enabling it requires at least building and
installing your own kernel with the feature compiled in, setting
acceptable latency targets, and defining the read/write mix the scheduler
should maintain. I don't expect you'll get satisfactory results out of
these disks even with lots of experimentation. If you want to experiment
with I/O scheduling on cheap SSDs, start by *migrating all production
workloads* out of your lab environment. The only safe and quick way out of
this mess is to replace all QVO SSDs with SSDs that are at least as large
and are designed for sustained write workloads.
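
For illustration only: the write limiting mentioned above boils down to
the idea sketched below, a throttle that admits writes no faster than the
drive's sustained drain rate so queue slots stay free for reads. This is
emphatically not the FreeBSD implementation (that lives in the kernel,
built with "options CAM_IOSCHED_DYNAMIC", and is tuned through per-device
sysctls); the class name, rates, and burst size here are made up for the
example.

#!/usr/bin/env python3
# Toy token-bucket write throttle: tokens refill at the assumed sustained
# QLC write rate, so bursts above that rate are delayed instead of piling
# up in the SSD's SLC cache.
import time

class WriteThrottle:
    def __init__(self, sustained_mb_s: float, burst_mb: float):
        self.rate = sustained_mb_s      # long-run admission rate (MB/s)
        self.capacity = burst_mb        # tolerated burst size (MB)
        self.tokens = burst_mb
        self.last = time.monotonic()

    def submit(self, size_mb: float) -> None:
        """Block until a write of size_mb fits under the sustained rate."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size_mb:
                self.tokens -= size_mb
                return
            time.sleep((size_mb - self.tokens) / self.rate)

if __name__ == "__main__":
    # Assumed figures: 80 MB/s sustained drain, 256 MB of tolerated burst.
    throttle = WriteThrottle(sustained_mb_s=80.0, burst_mb=256.0)
    start = time.monotonic()
    for i in range(10):
        throttle.submit(64.0)   # pretend to queue a 64 MB write burst
        print(f"write burst {i} admitted at t={time.monotonic() - start:.1f}s")

The real scheduler does this per device inside the kernel and can also
bias reads over writes, which is how it keeps read latency bounded even
while the SLC cache drains.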