Date:      Fri, 14 Sep 2007 10:37:52 +0200
From:      Jonathan McKeown <jonathan+freebsd-questions@hst.org.za>
To:        Steve Bertrand <iaccounts@ibctech.ca>
Cc:        Kurt Buff <kurt.buff@gmail.com>, freebsd-questions@freebsd.org
Subject:   Re: Scripting question
Message-ID:  <200709141037.53071.jonathan+freebsd-questions@hst.org.za>
In-Reply-To: <46EA3B6C.7050200@ibctech.ca>
References:  <a9f4a3860709131016w54c12b6fy94fc2b0f286aea3d@mail.gmail.com> <200709140930.21142.jonathan+freebsd-questions@hst.org.za> <46EA3B6C.7050200@ibctech.ca>

On Friday 14 September 2007 09:42, Steve Bertrand wrote:
> >>> I don't have the perl skills, though that would be ideal.
>
> -- snip --
>
> > Another approach in Perl would be:
> >
> > #!/usr/bin/perl
> > my (%names, %dups);
> > while (<>) {
> >     my ($key) = split;
> >     $dups{$key} = 1 if $names{$key};
> >     $names{$key} = 1;
> > }
> > delete @names{keys %dups};

> I don't know if this is completely relevant, but it appears as though it
>  may help.
>
> Bob Showalter once advised me on the Perl Beginners list as such,
> quoted, but snipped for clarity:
>
> see "perldoc -q duplicate" If the array elements can
> be compared with string semantics (as you are doing here), the following
> will work:
>
>    my @array = do { my %seen; grep !$seen{$_}++, @clean };

The problem with this is that it leaves you with one copy of each duplicated 
item: the requirement was to remove all copies of duplicated items and return 
only the non-repeated items.
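To make the distinction concrete, here is a minimal sketch of that requirement: count every occurrence first, then keep only the items seen exactly once. The sample list is hypothetical; the counting hash mirrors the %names/%dups idea in the script quoted above.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical input standing in for lines read from a file.
my @items = qw(alpha beta alpha gamma beta delta);

# Tally how many times each item appears.
my %count;
$count{$_}++ for @items;

# Keep only items that occurred exactly once -- duplicates are
# dropped entirely, not reduced to a single copy.
my @singletons = grep { $count{$_} == 1 } @items;

print "@singletons\n";   # gamma delta
```

Compare this with the grep !$seen{$_}++ idiom, which would instead keep alpha and beta once each: that idiom deduplicates, while this one filters out anything duplicated.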

Jonathan
