Date:      Tue, 4 Mar 2014 20:40:37 -0600 (CST)
From:      Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To:        Olav Gjerde <olav@backupbay.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE?
Message-ID:  <alpine.GSO.2.01.1403042037290.1717@freddy.simplesystems.org>
In-Reply-To: <CAJ7kQyFf19Un_TS=kW=T21HT+oabhsUhJij5oixQ2_uh0LvHRA@mail.gmail.com>
References:  <CAJ7kQyGTOuynOoLukXbP2E6GPKRiBWx8_mLEchk90WDKO+o-SA@mail.gmail.com> <53157CC2.8080107@FreeBSD.org> <CAJ7kQyGQjf_WbY64bLVX=YfmJUfAd8i22kVbVhZhEWPMg7bbQw@mail.gmail.com> <5315D446.3040701@freebsd.org> <CAJ7kQyFf19Un_TS=kW=T21HT+oabhsUhJij5oixQ2_uh0LvHRA@mail.gmail.com>

On Tue, 4 Mar 2014, Olav Gjerde wrote:

> I managed to mess up who I replied to, and Matthew replied back with a good
> answer which I think didn't reach the mailing list.
>
> I actually have a problem with query performance in one of my databases
> related to running PostgreSQL on ZFS, which is why I'm so interested in
> compression for the L2ARC cache. The problem is random IO reads:
> creating a report where I aggregate 75000 rows takes 30 minutes!!! The
> table that I query has 400 million rows, though.
> The dataset easily fits in memory, so if I run the same query again it
> takes less than a second.

Make sure that your database is on a filesystem whose ZFS record size 
(the recordsize property) matches the database block size, rather than 
the 128K default.  Otherwise far more data may be read than needed, 
and likewise far more data may be written than needed.
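
For PostgreSQL the matching value is 8K, since PostgreSQL stores data 
in 8K pages.  A minimal sketch, with a hypothetical pool/dataset name 
(note that recordsize only applies to blocks written after it is set, 
so set it before loading the data):

    # PostgreSQL uses 8K pages, so match the ZFS record size.
    zfs create -o recordsize=8k tank/pgdata
    # Verify the setting:
    zfs get recordsize tank/pgdata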

Regardless, L2ARC on SSD is a very good idea for this case.
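
Attaching an SSD as a cache (L2ARC) device is a one-line zpool 
operation.  A minimal sketch, with hypothetical pool and device names:

    # Add the SSD ada1 as an L2ARC cache device to pool 'tank'.
    zpool add tank cache ada1
    # Confirm the cache vdev and watch how it fills:
    zpool status tank
    zpool iostat -v tank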

Bob
-- 
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


