Date:      Thu, 26 Dec 1996 20:14:02 +0200 (EET)
From:      Heikki Suonsivu <hsu@clinet.fi>
To:        joerg_wunsch@uriah.heep.sax.de (Joerg Wunsch)
Cc:        freebsd-current@freebsd.org
Subject:   Re: DAT: reading with blocksize=256K
Message-ID:  <199612261814.UAA03704@cantina.clinet.fi>
In-Reply-To: J Wunsch's message of 24 Dec 1996 10:06:00 +0200
References:  <199612240747.IAA10234@uriah.heep.sax.de>


In article <199612240747.IAA10234@uriah.heep.sax.de> J Wunsch <j@uriah.heep.sax.de> writes:
   > I just got a DAT cartridge with a tar backup. It seems that the backup 
   > was made with a blocksize of 256K. Isn't it possible to get the data into 
   > my PC with -current (it looks like there's a limit of 64K)?

   This has been discussed at length already: it's currently limited by
   physio(9) to chunks of at most 64 KB size, due to the limitations in
   the scatter/gather list of some SCSI controllers that don't allow for
   more than 16 scatter/gather segments.
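
(As a quick illustration of that ceiling from user level, here is a minimal
C sketch: it just attempts a single 256 KB read() on the tape device and
reports what it gets back; with the physio(9) limit described above the read
is expected to fail or come back short.  The device name /dev/nrst0 is only
an assumption, use whatever your no-rewind tape device is called.)

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Attempt one 256 KB read on the tape device and report the result.
     * With a fixed 256 KB tape block and a 64 KB physio(9) ceiling this
     * is expected to fail; the point is only to make the limit visible.
     */
    int main(void)
    {
        size_t want = 256 * 1024;
        char *buf = malloc(want);
        int fd = open("/dev/nrst0", O_RDONLY);   /* assumed device name */
        ssize_t got;

        if (fd < 0 || buf == NULL) {
            perror("setup");
            return 1;
        }
        got = read(fd, buf, want);
        if (got < 0)
            printf("256 KB read failed: %s\n", strerror(errno));
        else
            printf("256 KB read returned %ld bytes\n", (long)got);
        close(fd);
        free(buf);
        return 0;
    }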

I have been able to read my old backups written with a 1024k blocksize.  The
secret was to use ddd instead of dd.  Not the debugger interface ddd, but the
old streaming dd by jtv@hut.fi; it is available somewhere on nic.funet.fi,
and I think team might be similar.  I do not know why it works.  ddd
certainly does not do anything smart.
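
I have not looked at what ddd actually does, so the following is only my
guess at a ddd/team-like copy loop: one big read() per tape block, writing
whatever comes back to stdout until the zero-length read at the filemark.
BLOCKSIZE and the device name are assumptions, not taken from the ddd
sources.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCKSIZE (1024 * 1024)   /* big enough for a 1024k tape block */

    int main(int argc, char **argv)
    {
        const char *dev = (argc > 1) ? argv[1] : "/dev/nrst0";  /* assumed */
        char *buf = malloc(BLOCKSIZE);
        int fd = open(dev, O_RDONLY);
        ssize_t n;

        if (fd < 0 || buf == NULL) {
            perror(dev);
            return 1;
        }
        /* One read per tape block; write out exactly what came back. */
        while ((n = read(fd, buf, BLOCKSIZE)) > 0) {
            if (write(STDOUT_FILENO, buf, (size_t)n) != n) {
                perror("write");
                return 1;
            }
        }
        if (n < 0)
            perror("read");
        close(fd);
        free(buf);
        return 0;
    }

Piped into "tar xvf -" that would be enough to restore the archive, assuming
the driver is actually willing to hand back whole blocks.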

-- 
Heikki Suonsivu, Täysikuu 10 C 83/02210 Espoo/FINLAND, hsu@clinet.fi
mobile +358-40-5519679 work +358-9-43542270 fax -4555276


