Date:      Fri, 27 Feb 2015 20:05:05 +0100
From:      Harald Schmalzbauer <h.schmalzbauer@omnilan.de>
To:        "Kenneth D. Merry" <ken@FreeBSD.ORG>
Cc:        current@FreeBSD.ORG, scsi@FreeBSD.ORG
Subject:   Re: sa(4) driver changes available for test
Message-ID:  <54F0BFE1.4000000@omnilan.de>
In-Reply-To: <20150226224202.GA14015@mithlond.kdm.org>
References:  <20150214003232.GA63990@mithlond.kdm.org> <20150219001347.GA57416@mithlond.kdm.org> <54EEEE1E.7020007@omnilan.de> <20150226224202.GA14015@mithlond.kdm.org>

Regarding Kenneth D. Merry's message of 26.02.2015 23:42 (localtime):

…
>>> And (untested) patches against FreeBSD stable/10 as of SVN revision 278974:
>>>
>>> http://people.freebsd.org/~ken/sa_changes.stable_10.20150218.1.txt
…

> I'm glad it is working well for you!  You can do larger I/O sizes with the
> Adaptec by changing your MAXPHYS and DFLTPHYS values in your kernel config
> file.  e.g.:
>
> options         MAXPHYS=(1024*1024)
> options         DFLTPHYS=(1024*1024)
>
> If you set those values larger, you won't be able to do more than 132K with
> the sym(4) driver on an x86 box.  (It limits the maximum I/O size to 33
> segments * PAGE_SIZE.)

Thanks for the hint! I wasn't aware that kern.cam.sa.N.maxio is subject to
driver limits tied to the system's MAXPHYS/DFLTPHYS values; I thought only
the silicon's limitations defined its value.
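
For completeness, this is roughly how I check the resulting limit on a
patched box (drive unit 0 and the 1 MB figures are only example values):

  # per-drive I/O limit as reported by the patched sa(4) driver
  $ sysctl kern.cam.sa.0.maxio

  # the kernel config knobs from your mail, built into a custom kernel
  options         MAXPHYS=(1024*1024)
  options         DFLTPHYS=(1024*1024)
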
But in order to have a pre-production test environment that matches as
closely as possible, I nevertheless replaced it and am now using mpt(4)
instead of ahc(4)/ahc_pci on PCI-X@S3210 (for parallel tape drives I
consistently have mpt(4)@PCIe, which is the same LSI 53C1020 chip, just with
an on-board PCI-X<->PCIe bridge).

It still just works fine! :-) (stable_10.20150218.1-patchset with LTO2,
LTO3 and DDS5.)
With DDS5, the density is reported as "unknown". If I remember correctly,
you have your DDS4 reporting "DDS4"?

> > therefore I'd like to point to the new port misc/vdmfec
> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=197950
> That looks cool. :)  I'm not a ports committer, but hopefully one of them
> will pick it up.

Cool it is indeed, but whether it's really useful or not is beyond my
expertise; I haven't been able to collect much magnetic-tape experience yet.
I know that LTO and similar "modern" tape technologies do their own ECC (in
the sense of an erasure code, mostly Reed-Solomon).
What I don't know (but want to be prepared for as well as possible) is how
arbitrary LTO drives behave if that one (1) bit in 10^17 is detected to be
uncorrectable.
If it weren't detected, the post erasure code (vdmfec in this case) would
certainly help.
But if the drive just cuts off the output, or stops streaming altogether,
vdmfec would be useless…

According to excerpts from "Study of Perpendicular AME Media in a Linear
Tape Drive", LTO-4 has a soft read error rate of 1 in 10^6 bits and DDS has
1 in 10^4 bits (!!!, according to HP C1537A DDS-3 - ACT/Apricot). So with
DDS, _every_ single block pax(1) writes to tape needs to be internally
corrected! Of course, nobody wants to send a zfs stream to DDS, it's far too
slow and too small, but it's worth mentioning.
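
To put a rough number on that: assuming, say, a 10 KiB tape block (just an
example size, not necessarily pax's default), at 1 error in 10^4 bits you'd
statistically see several corrected bits in every single block:

  # expected corrected bits per 10 KiB block at a 1-in-10^4 soft error rate
  $ echo '10240*8/10^4' | bc
  8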

For archives of zfs streams, I don't feel safe relying on the tape drives'
FEC alone. It was designed for backup solutions which do their own
blocking+checksumming, where the very rarely expected uncorrectable read
error would at worst lead to some single unrecoverable files – and in the
case of database files those are most likely post-recoverable.
But with one flipped bit in a zfs stream, you'd lose hundreds of gigabytes,
completely unrecoverable!
As long as the tape keeps spitting out complete blocks, even in the case
where the drive knows that the output is not correct, vdmfec ought to be
the holy grail :-)
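
Just to sketch what I have in mind (the vdmfec invocation is from memory;
block size and the n/k redundancy parameters should be taken from the port's
man page, and the pool/snapshot names are made up):

  # write: wrap the stream in FEC blocks before it hits the tape
  zfs send tank/data@weekly | vdmfec | dd of=/dev/nsa0 bs=32k

  # read back: decode/repair the FEC layer again (-d for decode, if I
  # remember the option correctly)
  dd if=/dev/nsa0 bs=32k | vdmfec -d | zfs receive tank/restore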

Going slightly more off topic:
Another hot candidate for being a holy grail, for me, is mbuffer(1).

I don't know whether tar/pax/cpio do any kind of FIFO buffering at all, but
for zfs send streaming, mbuffer(1) is obligatory. Without it, even with
really huge block sizes, you can't saturate the LTO-3 native rate. With
mbuffer(1) it's no problem to stream at LTO-4 native rate with a tape
transport block size of 32k.
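
For the record, a pipeline along these lines works for me (buffer size,
block size and dataset names are only examples, not a tuning recommendation):

  # keep a few GB buffered in RAM so the drive never has to stop and
  # reposition; -s is the tape transport block size, -m the FIFO size
  zfs send -R tank/data@weekly | mbuffer -s 32k -m 4G -o /dev/nsa0

  # restore direction
  mbuffer -s 32k -m 4G -i /dev/nsa0 | zfs receive -d tank
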
Btw, besides the FIFO buffering, I also miss star(1) for its multi-volume
support. tar(1) in base isn't really useful for tape buddies – IMHO it's
hardly adequate for any purpose and I don't understand its widespread usage…
Most likely the absence of dump(8) for zfs misleads people to tar(1) ;-)

Were there ever any thoughts about implementing FIFO buffering in sa(4)?
We don't have mbuffer(1) in base, but I think that, to complete FreeBSD's
tape support, users should find all the technology and tools needed to use
modern tape drives in base. If sa(4) could provide sysctl-controlled FIFO
buffering, some of the base tools would be a bit more appropriate for tape
usage, I think.

Thanks,

-Harry
