Date: Fri, 4 May 2007 13:33:09 -0400
From: David Banning <david+dated+1178731994.335e16@skytracker.ca>
To: dex <djdexter@gmail.com>
Cc: questions@freebsd.org
Subject: Re: can't zip large files 2gb >
Message-ID: <20070504173308.GA22281@skytracker.ca>
In-Reply-To: <c357d2a10705040959n317c3e42hb4ef48fc8b3c66f9@mail.gmail.com>
References: <20070501195825.GA10269@skytracker.ca> <20070504155343.GA13432@skytracker.ca> <c357d2a10705040959n317c3e42hb4ef48fc8b3c66f9@mail.gmail.com>
> Try the same operation on a known working system, take that output
> file and do a diff with that and the corrupt one after a 'strings', so
> 'strings new.gz > new-text', 'strings corrupt.gz > corrupt-text',
> 'diff new-text corrupt-text'. I'm just interested in how it's being
> corrupted and maybe the strings output will tell you something.

I don't have a separate system, but I compared the strings output of the
tar before compression with the strings output of the tar -after-
compression and decompression - as I mentioned, the size difference is
only two bytes. Memory was exhausted when I attempted a diff of the two
files, but there was around a 1 MB difference between the two 1.5 GB
ASCII files.

> Sorry if this was specified before, but did this just start happening
> or is this the first time you've tried to gzip large files on this
> system?

This is the first time I have tried files of this size - but I get the
same problem no matter which compression utility I use; I have tried
gzip, bzip2, rzip and compress.
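For reference, the round-trip check discussed above can be sketched as a
small shell script. The file names below are illustrative stand-ins (the
thread involves a 1.5 GB tarball); `cmp` is used for the byte-for-byte
comparison since `diff` loads whole files into memory, which is what
exhausted memory here:

```shell
# Create a small sample file as a stand-in for the real tar archive.
head -c 1048576 /dev/urandom > sample.tar

# Compress, then decompress to a separate file.
gzip -c sample.tar > sample.tar.gz
gzip -dc sample.tar.gz > sample.roundtrip

# A lossless round trip must be byte-identical; cmp reports the first
# differing byte offset and exits non-zero if the files differ.
cmp sample.tar sample.roundtrip && echo "round-trip OK"

# If cmp reports a difference, the strings/diff approach from the quoted
# advice can narrow down where the corruption starts:
#   strings sample.tar > orig-text
#   strings sample.roundtrip > corrupt-text
#   diff orig-text corrupt-text | head
```

On a healthy system this prints "round-trip OK"; on the affected system the
`cmp` step would instead report the offset of the first corrupted byte.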