Date:      Thu, 27 Mar 2014 11:06:02 +0100
From:      Joar Jegleim <joar.jegleim@gmail.com>
To:        Ronald Klop <ronald-lists@klop.ws>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: zfs l2arc warmup
Message-ID:  <CAFfb-hrV-onrmfxATG3r0bYsoLQcrRCt49rAa_qr2ermvA9t9g@mail.gmail.com>
In-Reply-To: <op.xddh1xf3kndu52@ronaldradial.radialsg.local>
References:  <CAFfb-hpi20062+HCrSVhey1hVk9TAcOZAWgHSAP93RSov3sx4A@mail.gmail.com> <op.xddh1xf3kndu52@ronaldradial.radialsg.local>

Hi,

Thanks for your input.

The current setup:
2 x HP ProLiant DL380 G7, each with 2 x Xeon (six-core) @ 2667MHz and
144GB DDR3 @ 1333MHz (ECC, registered).
Each server has an external shelf with 20 x 1TB SATA ('SAS midline')
7200RPM disks.
The shelf is connected via a Smart Array P410i with 1GB cache.

The second server is a failover; I use zfs send/receive for
replication (with mbuffer). I had HAST in there for a couple of
months but got cold feet after some problems, and I hate having an
expensive server just 'sitting there'. In the near future we will
start serving JPEGs from both servers, which is a setup I like a lot
more.
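
For reference, the replication pipeline looks roughly like the sketch
below; the dataset/snapshot names and the standby hostname are just
placeholders, and the mbuffer sizes are only a starting point:

# incremental zfs send, smoothed out by mbuffer on both ends
zfs send -i tank/jpegs@prev tank/jpegs@now | \
  mbuffer -s 128k -m 1G | \
  ssh standby 'mbuffer -s 128k -m 1G | zfs receive -F tank/jpegs'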

I've set up 20 single-disk 'RAID 0' logical disks on that P410i, and
built ZFS mirrors over those 20 disks (RAID 10), which gives me about
~9TB of storage.
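
To give an idea, the pool was created along these lines (the da
device numbers are illustrative; they depend on how the P410i
presents the logical disks):

# 10 mirror vdevs over 20 disks, i.e. RAID 10
zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4 da5 \
  mirror da6 da7   mirror da8 da9   mirror da10 da11 \
  mirror da12 da13 mirror da14 da15 \
  mirror da16 da17 mirror da18 da19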

For the record, I initially used an LSI SAS 9207-4i4e SGL HBA to
connect the external shelf, but after some testing I realized I got
more performance out of the P410i with its cache enabled.
I have dual power supplies as well as a UPS, so I've chosen
performance and accept the risk that involves.

At the moment I have 2 x Intel 520 480GB SSDs for l2arc; the plan is
to add 2 more SSDs to get ~2TB of l2arc, and to add a small SSD for
log/zil.
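
Adding the cache and log devices is straightforward; something like
this (device names illustrative again):

# the two Intel 520s as L2ARC, plus the planned log device
zpool add tank cache da20 da21
zpool add tank log da22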

The pool has the default recordsize (128k) and atime=off; I've set
compression to lz4 and get a compressratio of 1.18x.
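
For completeness, those properties were set with (dataset name
illustrative):

zfs set atime=off tank
zfs set compression=lz4 tank
zfs get compressratio tank    # reports 1.18x here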

I've set the following sysctls related to zfs:
# 100GB
vfs.zfs.arc_max=107374182400
# used to be 5 (default), trying 1
vfs.zfs.txg.timeout="1"
# this is to work with the RAID controller's cache
vfs.zfs.cache_flush_disable=1
vfs.zfs.write_limit_shift=9
vfs.zfs.txg.synctime_ms=200
# L2ARC tuning
# Maximum number of bytes written to l2arc per feed
# 8MB per feed (actual rate = vfs.zfs.l2arc_write_max * (1000 / vfs.zfs.l2arc_feed_min_ms))
# so 8MB every 200ms = 40MB/s
vfs.zfs.l2arc_write_max=8388608
# Mostly only relevant in the first few hours after boot
# write_boost, speed to fill l2arc until it is filled (after boot)
# 70MB per feed; same rule applies, multiply by 5 = 350MB/s
vfs.zfs.l2arc_write_boost=73400320
# Not sure
vfs.zfs.l2arc_headroom=2
# l2arc feeding period
vfs.zfs.l2arc_feed_secs=1
# minimum l2arc feeding period
vfs.zfs.l2arc_feed_min_ms=200
# control whether streaming data is cached or not
vfs.zfs.l2arc_noprefetch=1
# control whether feed_min_ms is used or not
vfs.zfs.l2arc_feed_again=1
# no read and write at the same time
vfs.zfs.l2arc_norw=1
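
To watch the l2arc actually filling at those rates, the arcstats
counters are handy, e.g.:

# current l2arc size (bytes) and bytes written so far
sysctl kstat.zfs.misc.arcstats.l2_size
sysctl kstat.zfs.misc.arcstats.l2_write_bytes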

> 2TB of l2arc?
> Why don't you put your data on SSDs, get rid of the l2arc and buy some
> extra RAM instead?
> Then you don't need any warm-up.
I'm considering this option, but today I have ~10TB of storage and
need space for future growth. I also like the idea that if the l2arc
dies, I lose performance, not my data.
Besides, I reckon I'd have to use much more expensive SSDs for the
main datastore, whereas for l2arc I can use cheaper ones. Those Intel
520s can deliver ~50,000 IOPS, and I need IOPS, not necessarily
bandwidth.
At least that's my understanding of this.
Open for input! :)
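
To sanity-check whether the l2arc is earning its keep once warm, the
hit/miss counters give a rough hit ratio (l2_hits / (l2_hits +
l2_misses)):

sysctl kstat.zfs.misc.arcstats.l2_hits
sysctl kstat.zfs.misc.arcstats.l2_misses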

On 27 March 2014 10:02, Ronald Klop <ronald-lists@klop.ws> wrote:
> On Thu, 27 Mar 2014 08:50:06 +0100, Joar Jegleim <joar.jegleim@gmail.com>
> wrote:
>
>> Hi list !
>>
>> I'm struggling to get a clear understanding of how the l2arc gets
>> warm (zfs).
>> It's a FreeBSD 9.2-RELEASE server.
>>
>> From various forums I've come up with this, which I have in my
>> /boot/loader.conf:
>> # L2ARC tuning
>> # Maximum number of bytes written to l2arc per feed
>> # 8MB per feed (actual = vfs.zfs.l2arc_write_max * (1000 / vfs.zfs.l2arc_feed_min_ms))
>> # so 8MB every 200ms = 40MB/s
>> vfs.zfs.l2arc_write_max=8388608
>> # Mostly only relevant in the first few hours after boot
>> # write_boost, speed to fill l2arc until it is filled (after boot)
>> # 70MB per feed; same rule applies, multiply by 5 = 350MB/s
>> vfs.zfs.l2arc_write_boost=73400320
>> # Not sure
>> vfs.zfs.l2arc_headroom=2
>> # l2arc feeding period
>> vfs.zfs.l2arc_feed_secs=1
>> # minimum l2arc feeding period
>> vfs.zfs.l2arc_feed_min_ms=200
>> # control whether streaming data is cached or not
>> vfs.zfs.l2arc_noprefetch=1
>> # control whether feed_min_ms is used or not
>> vfs.zfs.l2arc_feed_again=1
>> # no read and write at the same time
>> vfs.zfs.l2arc_norw=1
>>
>> But what I really wonder is: how does the l2arc get warmed up?
>> I'm thinking of 2 scenarios:
>>
>> a.: when the arc is full, stuff that gets evicted from the arc is
>> put over in the l2arc; that means files in the fs that are never
>> accessed will never end up in the l2arc, right?
>>
>> b.: zfs runs through the fs in the background and fills up the
>> l2arc for any file, regardless of whether it has been accessed or
>> not (this is the 'feature' I'd like).
>>
>> I suspect scenario a is what really happens, and if so, how do
>> people warm up the l2arc manually?
>> I figured that if I rsync everything from the pool that I want
>> cached, it will fill up the l2arc for me, which I'm doing right now.
>> But it takes 3-4 days to rsync the whole pool.
>>
>> Is this how 'you' do it to warm up the l2arc, or am I missing something?
>>
>> The thing with this particular pool is that it serves somewhere
>> between 20 and 30 million JPEGs for a website. The front page of the
>> site will, on every reload, present a mosaic of about 36 JPEGs, and
>> the JPEGs are fetched completely at random from the pool.
>> I don't know which JPEGs will be fetched at any given time, so I'm
>> installing about 2TB of l2arc (the pool is about 1.6TB today) and I
>> want the whole pool to be available from the l2arc.
>>
>>
>> Any input on my 'rsync solution' to warm up the l2arc is much
>> appreciated :)
>
>
>
> 2TB of l2arc?
> Why don't you put your data on SSDs, get rid of the l2arc and buy some
> extra RAM instead?
> Then you don't need any warm-up.
>
> For future questions, please provide more details about your setup:
> what disks, what SSDs, how much RAM, and how your pool is configured
> (mirror, raidz, ...). Things like that.
>
> Ronald.



-- 
----------------------
Joar Jegleim
Homepage: http://cosmicb.no
Linkedin: http://no.linkedin.com/in/joarjegleim
fb: http://www.facebook.com/joar.jegleim
AKA: CosmicB @Freenode

----------------------