Date:      Thu, 4 Oct 2007 22:37:54 +0200
From:      Mel <fbsd.questions@rachie.is-a-geek.net>
To:        freebsd-questions@freebsd.org
Subject:   Re: Managing very large files
Message-ID:  <200710042237.57712.fbsd.questions@rachie.is-a-geek.net>
In-Reply-To: <47054A1D.2000701@ibctech.ca>
References:  <4704DFF3.9040200@ibctech.ca> <20071003200013.GD45244@demeter.hydra> <47054A1D.2000701@ibctech.ca>

On Thursday 04 October 2007 22:16:29 Steve Bertrand wrote:
> >> man 1 split
> >>
> >> (esp. -l)
> >
> > That's probably the best option for a one-shot deal like this.  On the
> > other hand, Perl itself provides the ability to go through a file one
> > line at a time, so you could just read a line, operate, write a line (to
> > a new file) as needed, over and over, until you get through the whole
> > file.
> >
> > The real problem would be reading the whole file into a variable (or even
> > multiple variables) at once.
>
> This is what I am afraid of. Just out of curiosity, if I did try to read
> the entire file into a Perl variable all at once, would the box panic,
> or as the saying goes 'what could possibly go wrong'?
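
On the split(1) suggestion first: if the lines really are independent,
the -l flag is the one you want. Something like

  split -l 1000000 big.txt part.

(file name and chunk size invented for the example) cuts the file into
million-line chunks named part.aa, part.ab, and so on, which you can
feed to a script one at a time.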

As for what happens if you slurp the whole thing into one variable: the
box shouldn't panic. The perl process would just keep growing until it
hits its resource limits and dies with an "Out of memory!" error,
probably after dragging the machine through heavy swapping first.

There's probably a reason why you want to process that file as a whole,
though - splitting it can be a problem if you need to keep track of
state and the split lands on the wrong line. So I'd open it in perl (or
whatever processor) directly, read it one line at a time, and use a
database for storage if I really need to keep context across lines, so
that on each iteration perl's memory stays clean.
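
Untested sketch of what I mean - the file names (big.txt, big.out,
state.db) and the per-line work are placeholders, and it leans on the
DB_File module that ships with perl:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DB_File;    # ties a hash to an on-disk Berkeley DB file

  # Cross-line state lives on disk, not in perl's memory.
  my %state;
  tie %state, 'DB_File', 'state.db' or die "tie state.db: $!";

  open my $in,  '<', 'big.txt' or die "big.txt: $!";
  open my $out, '>', 'big.out' or die "big.out: $!";

  # Read, operate, write - one line at a time.
  while (my $line = <$in>) {
      chomp $line;
      # Stand-in for the real processing: count first fields.
      my ($key) = split /\s+/, $line;
      $state{$key}++ if defined $key;
      print $out $line, "\n";
  }

  close $out or die "close big.out: $!";
  close $in;
  untie %state;

Only one line is ever held in memory, so the size of the file stops
mattering.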

-- 
Mel


