From owner-freebsd-questions Sat May 26 15:58:59 2001
Delivered-To: freebsd-questions@freebsd.org
Received: from web9506.mail.yahoo.com (web9506.mail.yahoo.com [216.136.129.20])
	by hub.freebsd.org (Postfix) with SMTP id C83A837B423
	for ; Sat, 26 May 2001 15:58:55 -0700 (PDT)
	(envelope-from felix_hdez@yahoo.com)
Message-ID: <20010526225855.16007.qmail@web9506.mail.yahoo.com>
Received: from [152.2.142.108] by web9506.mail.yahoo.com; Sat, 26 May 2001 15:58:55 PDT
Date: Sat, 26 May 2001 15:58:55 -0700 (PDT)
From: Felix Hernandez
Subject: Re: Slower tape drive when compression off
To: Bill Moran
Cc: freebsd-questions@FreeBSD.ORG
In-Reply-To: <3B102A2A.8AF59769@iowna.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Sender: owner-freebsd-questions@FreeBSD.ORG
Precedence: bulk
List-ID: 
List-Archive: (Web Archive)
List-Help: (List Instructions)
List-Subscribe: 
List-Unsubscribe: 
X-Loop: FreeBSD.ORG

--- Bill Moran wrote:
> Felix Hernandez wrote:
> >
> > Hi,
> >
> > I have a Quantum DLT8000 tape drive (40/80 GB), and I'm
> > puzzled by the following fact: writing the tape in
> > compressed mode (mt comp on) is faster (4 MB/s) than in
> > uncompressed mode (mt comp off, 2 MB/s). I use tar (1.13)
> > for my backups, and the tape is attached to an IBM
> > Netfinity 7100 running FreeBSD 4.1. I have already tried
> > a large, fixed blocksize (mt blocksize 10240, tar -b 20),
> > but it didn't help. Do you know why this happens? How can
> > I fix it? I don't want to use compression, since the data
> > is already gzipped, and recompressing it wastes 5 GB.
>
> I'm a little confused.
> You say you're compressing the data with tar and then using
> compression on the tape drive as well?
> If that's the case, I don't know what the issue is.
> If you're piping the data through raw and compression is
> faster, it's probably because it's measuring the data rate
> before it compresses the data, and then gets about 50%
> compression.
> The slow performer in any of these cases is going to be the
> tape itself.

The original data is already compressed, so compressing it
again only makes it larger. I just tried an experiment,
suggested by Ian Dowse, in which I use only dd, so we can
rule out tar as the source of the problem:

root@oberon $$$ dd if=/dev/urandom bs=10k count=5000 > junk
5000+0 records in
5000+0 records out
51200000 bytes transferred in 41.783845 secs (1225354 bytes/sec)
root@oberon $$$ mt comp off
root@oberon $$$ dd if=junk of=/dev/rsa0 bs=10k
5000+0 records in
5000+0 records out
51200000 bytes transferred in 20.322325 secs (2519397 bytes/sec)
root@oberon $$$ mt comp on
root@oberon $$$ dd if=junk of=/dev/rsa0 bs=10k
5000+0 records in
5000+0 records out
51200000 bytes transferred in 12.665153 secs (4042588 bytes/sec)

("junk" cannot be compressed, since it is completely random --
it only gets larger after gzipping it.)

The output of "mt status" is:

Mode      Density        Blocksize      bpi      Compression
Current:  0x41           variable       0        IDRC
---------available modes---------
0:        0x41           variable       0        IDRC
1:        0x41           variable       0        IDRC
2:        0x41           variable       0        IDRC
3:        0x41           variable       0        IDRC
---------------------------------
Current Driver State: at rest.
---------------------------------
File Number: 0        Record Number: 0

Thank you for your quick reply. I hope someone can help
figure this out.

__________________________________________________
Do You Yahoo!?
Yahoo! Auctions - buy the things you want at great prices
http://auctions.yahoo.com/


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-questions" in the body of the message
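The parenthetical claim above -- that gzipping random data only makes it
larger -- can be checked directly on any system: gzip adds a header,
trailer, and block framing, so incompressible input comes out slightly
bigger than it went in. A minimal sketch, reusing the "junk" file name
from the experiment (the count is scaled down for speed; it is not the
thread's 50 MB test):

```shell
# Generate 1 MB of random data, gzip it, and compare sizes.
# Random bytes are incompressible, so junk.gz ends up slightly
# larger than junk (gzip's own framing overhead).
dd if=/dev/urandom of=junk bs=10k count=100 2>/dev/null
gzip -c junk > junk.gz
wc -c junk junk.gz
rm -f junk junk.gz
```

This is also why the drive's IDRC compression cannot shrink Felix's
already-gzipped backups: the compressible redundancy has already been
removed by gzip.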