Date:      Tue, 07 Mar 2006 14:12:10 +0000
From:      Alex Zbyslaw <xfb52@dial.pipex.com>
To:        Bart Silverstrim <bsilver@chrononomicon.com>
Cc:        freebsd-questions Questions list <freebsd-questions@freebsd.org>
Subject:   Re: awk question
Message-ID:  <440D94BA.7040400@dial.pipex.com>
In-Reply-To: <551fa2ce1b8832dd3370d0e781c5b301@chrononomicon.com>
References:  <75a11e816bee8f2664ae1ccbd618dca7@athensasd.org>	<cce506b0603061345n4de96301sd9b8a8dd17deeac1@mail.gmail.com> <551fa2ce1b8832dd3370d0e781c5b301@chrononomicon.com>

Bart Silverstrim wrote:

>
> On Mar 6, 2006, at 4:45 PM, Noel Jones wrote:
>
>> On 3/6/06, Bart Silverstrim <bsilverstrim@athensasd.org> wrote:
>>
>>> I'm totally drawing a blank on where to start out on this.
>>>
>>> If I have a list of URLs like
>>> http://www.happymountain.com/archive/digest.gif
>>>
>>> How could I use Awk or Sed to strip everything after the .com?  Or is
>>> there a "better" way to do it?
>>
>>
>>     | cut -d / -f 1-3
>
>
> Oh boy was that one easy.  It was a BAD mental hiccup.
>
> I'll add a sort and uniq and it should be all ready to go.  Thanks!
>

More than one way to skin that cat!  cut is nice'n'easy but since you
asked about awk and sed, these would work too:


awk -F/ 'NF > 2 {printf "%s//%s\n",  $1, $3}'

or

sed 's,^\([^/]*://[^/]*\).*,\1,'
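For comparison, here's a quick sketch showing all three approaches on the
sample URL from the original post (the URL is just the example from the
thread; all three print the same scheme-plus-host prefix):

```shell
#!/bin/sh
# Strip everything after the host part of a URL, three ways.
url="http://www.happymountain.com/archive/digest.gif"

# cut: keep the first three /-delimited fields ("http:", "", host)
echo "$url" | cut -d / -f 1-3

# awk: same field split; reassemble scheme and host explicitly
echo "$url" | awk -F/ 'NF > 2 {printf "%s//%s\n", $1, $3}'

# sed: capture up to the first / after "://" and drop the rest
echo "$url" | sed 's,^\([^/]*://[^/]*\).*,\1,'
```

Each line prints http://www.happymountain.com.  For the dedup step
mentioned above, piping into sort -u does sort and uniq in one go.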

--Alex
