Date:      Wed, 5 Mar 2014 09:28:49 +0000
From:      krad <kraduk@gmail.com>
To:        Olav Gjerde <olav@backupbay.com>
Cc:        FreeBSD FS <freebsd-fs@freebsd.org>
Subject:   Re: Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE?
Message-ID:  <CALfReyc6hwc13TFc8AGZbVvyK2enfEjhJ8ndAAQ6p7pWzg-QCg@mail.gmail.com>
In-Reply-To: <CAJ7kQyEp208XKt3CaiBufiB+g_CHAkUgzAzVdX_6Gx2WyW1ENg@mail.gmail.com>
References:  <CAJ7kQyGTOuynOoLukXbP2E6GPKRiBWx8_mLEchk90WDKO+o-SA@mail.gmail.com> <53157CC2.8080107@FreeBSD.org> <CAJ7kQyGQjf_WbY64bLVX=YfmJUfAd8i22kVbVhZhEWPMg7bbQw@mail.gmail.com> <5315D446.3040701@freebsd.org> <CAJ7kQyFf19Un_TS=kW=T21HT+oabhsUhJij5oixQ2_uh0LvHRA@mail.gmail.com> <alpine.GSO.2.01.1403042037290.1717@freddy.simplesystems.org> <CAJ7kQyEp208XKt3CaiBufiB+g_CHAkUgzAzVdX_6Gx2WyW1ENg@mail.gmail.com>

I thought the recordsize referred to the maximum block size rather than the
actual block size. Please correct me if I'm wrong.
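
For what it's worth (the dataset name below is just a placeholder), the
property is easy to check and set per dataset:

  zfs get recordsize tank/pgdata
  zfs set recordsize=8K tank/pgdata

As I understand it, a file smaller than the recordsize is stored in a single
block sized to fit it, and only files that grow past the recordsize are split
into recordsize blocks, so it acts as a ceiling rather than a fixed allocation
unit. Happy to be corrected on that as well.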


On 5 March 2014 07:17, Olav Gjerde <olav@backupbay.com> wrote:

> Currently I've set the recordsize to 8k; however, I'm thinking maybe a
> recordsize of 4k would be more optimal?
> This is because the compressratio with LZ4 is around 2.5, and this value
> has been constant for all my data while growing from a few megabytes to
> tens of gigabytes.
> Maybe something I should play with to see if it makes a difference.
>
>
> On Wed, Mar 5, 2014 at 3:40 AM, Bob Friesenhahn <
> bfriesen@simple.dallas.tx.us> wrote:
>
> > On Tue, 4 Mar 2014, Olav Gjerde wrote:
> >
> >> I managed to mess up who I replied to, and Matthew replied back with a
> >> good answer which I think didn't reach the mailing list.
> >>
> >> I actually have a problem with query performance in one of my databases
> >> related to running PostgreSQL on ZFS, which is why I'm so interested in
> >> compression for the L2ARC cache. The problem is random IO reads: creating
> >> a report where I aggregate 75,000 rows takes 30 minutes!!! The table
> >> that I query has 400 million rows, though.
> >> The dataset easily fits in memory, so if I run the same query again it
> >> takes less than a second.
> >>
> >
> > Make sure that your database is on a filesystem with zfs block-size
> > matching the database block-size (rather than 128K).  Otherwise far more
> > data may be read than needed, and likewise, writes may result in writing
> > far more data than needed.
> >
> > Regardless, L2ARC on SSD is a very good idea for this case.
> >
> > Bob
> > --
> > Bob Friesenhahn
> > bfriesen@simple.dallas.tx.us,
> http://www.simplesystems.org/users/bfriesen/
> > GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
> >
>
>
>
> --
> Olav Grønås Gjerde
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>
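
Re Bob's point about matching block sizes: PostgreSQL's default page size is
8K, so for a dataset dedicated to the database something along these lines is
what I'd try (tank/pgdata is again just a stand-in for whatever dataset holds
the cluster):

  zfs create -o recordsize=8K -o compression=lz4 tank/pgdata

Bear in mind that recordsize only affects files written after the property is
set, so existing tables would need to be rewritten (e.g. a dump/restore, or
copying the data into a freshly created dataset) to actually pick up the
smaller block size.

On the original L2ARC question, once you are on a version with compressed
L2ARC you should be able to get a rough idea of how well it compresses from
the arcstats sysctls (names from memory, so please double-check them on your
box):

  sysctl kstat.zfs.misc.arcstats.l2_size
  sysctl kstat.zfs.misc.arcstats.l2_asize

If I remember right, l2_size is the logical amount of data held in the L2ARC
and l2_asize is the space it actually occupies on the cache device, so the
ratio between the two gives you the effective L2ARC compression.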


