Date:      Thu, 4 Oct 2007 18:32:31 +0300
From:      Giorgos Keramidas <keramida@ceid.upatras.gr>
To:        Steve Bertrand <iaccounts@ibctech.ca>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Managing very large files
Message-ID:  <20071004153231.GB6868@kobe.laptop>
In-Reply-To: <4704DFF3.9040200@ibctech.ca>
References:  <4704DFF3.9040200@ibctech.ca>

On 2007-10-04 08:43, Steve Bertrand <iaccounts@ibctech.ca> wrote:
> Hi all,
> I've got a 28GB tcpdump capture file that I need to break down into a
> series of files of 100,000k lines or so, hopefully without having to
> read the entire file all at once.
> 
> I need to run a few Perl processes on the data in the file, but AFAICT,
> doing that on the entire original file is asking for trouble.
> 
> Is there any way to accomplish this, preferably with the ability to
> incrementally name each newly created file?

If you only want specific parts of the dump in the 'split' output, you
may have luck with something like:

	tcpdump -r input.pcap -w output.pcap 'filter rules here'

This reads the file sequentially, which can be slower than having it all
in memory, but with a file this large it is probably a good idea :)



