Date:      Sat, 15 Jan 2022 04:54:01 +0100
From:      Tomasz CEDRO <tomek@cedro.info>
To:        "Kevin P. Neal" <kpn@neutralgood.org>
Cc:        "Greg 'groggy' Lehey" <grog@freebsd.org>, David Christensen <dpchrist@holgerdanske.com>,  FreeBSD Questions Mailing List <freebsd-questions@freebsd.org>
Subject:   Re: zero filling a storage device (was: dd and mbr)
Message-ID:  <CAM8r67AK9AtOGpf2QnmzKn+SA6HvwdxjDqu0r_sLcJisaN3pYg@mail.gmail.com>
In-Reply-To: <YeI7g+m0FSppvUDr@neutralgood.org>
References:  <77680665-7ddb-23c5-e866-05d112339b60@holgerdanske.com> <20220114023002.GP61872@eureka.lemis.com> <YeDryNdYe1S20wd2@neutralgood.org> <20220114045558.GQ61872@eureka.lemis.com> <YeI7g+m0FSppvUDr@neutralgood.org>

On Sat, Jan 15, 2022 at 4:13 AM Kevin P. Neal wrote:
>
> On Fri, Jan 14, 2022 at 03:55:58PM +1100, Greg 'groggy' Lehey wrote:
> > On Thursday, 13 January 2022 at 22:19:36 -0500, Kevin P. Neal wrote:
> > > I'm not 100% certain that an SSD will always keep a logical block
> > > assigned to a physical block.  And I'm not 100% certain that an SSD
> > > won't notice that all zeros are being written to a block and just
> > > optimize out the write.
> >
> > If I understand what you mean here, you're suggesting that SSDs may
> > keep a list of zeroed-out blocks and just optimize them away?  It's
> > possible, though I haven't heard of it.  I don't know how often blocks
> > are completely zeroed out, but I suspect that it wouldn't be worth
> > it.  If they did, it would be a good advertisement, because it would
> > indirectly increase the storage capacity of the SSD.
>
> Yes, I believe I'm making sense to you at least. :)
>
> No, it would decrease wear on the drive. It would increase the life
> expectancy of the drive. See below.
>
> > Until proof of the contrary, I'd say "no, this doesn't happen".
>
> Doesn't ZFS notice all-zero blocks in some cases? I have a hazy memory
> of this being done when compression is turned off.
>
> Anyway, an SSD stores multiple blocks per page. They can only erase whole
> pages, and if multiple blocks are stored on that page then those blocks
> must be rewritten after the page is cleared. Then another block on the
> page can be written.
>
> This is why SSDs are overprovisioned. They try hard to keep the number of
> blocks written per page down, to avoid having to rewrite them when an
> otherwise unrelated block on the same page is written.
>
> A mapping of logical (or wire addressable if you prefer) blocks is then
> needed. It would make sense to have that mapping indicate that a block is
> all zeros to avoid wasting space on pages. If you ask the drive for the
> contents of the block it will tell you it is all zeros. But no space in
> the drive need be used for the zeros.
>
> This doesn't increase the usable space on the drive. Rather, it decreases
> the wear on the drive.
>
> But I'm not an SSD firmware guy so I can't swear that all or even any
> drives actually optimize this way. "It makes sense to me." :)
> --
> Kevin P. Neal                                http://www.pobox.com/~kpn/
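
For what it's worth, Kevin's ZFS hunch is easy to check for yourself: a
quick sketch (pool and dataset names below are just examples) shows that
an all-zero file written through a dataset with compression enabled takes
essentially no space on the pool, because ZFS turns all-zero blocks into
holes:

  zfs create -o compression=lz4 -o mountpoint=/ztest zroot/ztest
  # write 1 GB of zeros through ZFS
  dd if=/dev/zero of=/ztest/zeros bs=1M count=1024
  # logical size vs. space actually allocated on the pool
  ls -lh /ztest/zeros
  du -h /ztest/zeros
  zfs get used,compressratio zroot/ztest

Note this only happens with compression enabled (any setting, even the
cheap zero-length-encoding "zle"); with compression off the zeros are
written out as-is.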

Using ZFS LZ4 compression may keep the repeating patterns off the disk,
so only the compressed data actually gets stored.. but this happens at
the ZFS level, not the disk level. It can also save a lot of disk
space: I once made a dd image of a 1TB drive that was only partially
used, and the image consumed only around 1/3 of that space (the rest of
the drive was marked as free). LZ4 is amazingly fast and efficient :-)
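
A minimal sketch of that kind of imaging job, assuming the source disk
shows up as ada1 and the pool is called tank (both names are just
examples, not a recipe):

  # dataset to hold raw disk images, with LZ4 enabled
  zfs create -o compression=lz4 tank/images
  # raw image of the whole drive; unused regions that are still zeroed
  # compress away to almost nothing
  dd if=/dev/ada1 of=/tank/images/ada1.img bs=1M
  # how well the image compressed
  zfs get compressratio,logicalused,used tank/images

Of course this only helps if the free space really is zeros; free space
that still holds remnants of old data compresses much worse.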

Another thing is that there is no guarantee what actually happens on
the platter, because the disk firmware can do all sorts of tricks
on-the-fly between the platter and the interface.. I guess this is not
very different between HDD and SSD.. except when you buy a "RED" class
of disk that is supposed to perform only raw read/write with nothing in
the middle. I recently noticed that RED disks are about 1/3 faster than
GREEN disks, see numbers below :-)
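
If anyone wants to reproduce that kind of comparison, a crude
sequential-read test is usually enough to show the difference (device
names below are placeholders; reading the raw device is
non-destructive):

  # read 8 GB sequentially from each drive and compare the rates dd reports
  dd if=/dev/ada1 of=/dev/null bs=1M count=8192
  dd if=/dev/ada2 of=/dev/null bs=1M count=8192
  # FreeBSD's diskinfo can run a similar quick transfer-rate test
  diskinfo -t /dev/ada1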

Here is a work-in-progress article on how to use an M.2 NVMe drive on a
PCI-E 4.0 controller in an older motherboard that has only PCI-E 2.0
and onboard SATA :-) The funny thing is that the BIOS sees neither the
controller nor the disk, so neither is available for direct boot, but
running the boot loader and kernel from a SATA HDD lets the system use
the NVMe SSD as ZFS / with no additional drivers, NVMe/UEFI firmware,
etc. :-)

https://www.tomek.cedro.info/m2-nvm-disk-pci-e-controller-on-freebsd/
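
Very roughly, the trick boils down to something like this (names and
paths are illustrative, not the article's exact steps): the BIOS boots
FreeBSD from the SATA HDD, and once the kernel is up it sees the NVMe
disk just fine and mounts the ZFS root from it:

  # the NVMe drive is visible to the kernel even though the BIOS is not
  # aware of it (it appears as nvd0 or nda0 depending on FreeBSD version)
  nvmecontrol devlist
  # ZFS root pool created on the NVMe disk
  zpool create zroot /dev/nvd0
  # in /boot/loader.conf on the SATA HDD that the BIOS actually boots:
  zfs_load="YES"
  vfs.root.mountfrom="zfs:zroot"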

-- 
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info


