Date:      Fri, 28 Mar 2014 07:12:01 -0500
From:      Karl Denninger <karl@denninger.net>
To:        Joar Jegleim <joar.jegleim@gmail.com>, freebsd-fs@freebsd.org
Subject:   Re: zfs l2arc warmup
Message-ID:  <53356711.8010509@denninger.net>
In-Reply-To: <CAFfb-hr=wR6nxqL+4tn-y2eQEw4n_g7rZoK9rRLnm_Ldcm1TZQ@mail.gmail.com>
References:  <CAFfb-hpi20062+HCrSVhey1hVk9TAcOZAWgHSAP93RSov3sx4A@mail.gmail.com> <CALfReydi_29L5tVe1P-aiFnm_0T4JJt72Z1zKouuj8cjHLKhnw@mail.gmail.com> <CAFfb-hpZos5-d3xo8snU1aVER5u=dSFRx-B-oqjFRTkT83w0Kg@mail.gmail.com> <20140328005911.GA30665@neutralgood.org> <CAFfb-hr=wR6nxqL+4tn-y2eQEw4n_g7rZoK9rRLnm_Ldcm1TZQ@mail.gmail.com>

On 3/28/2014 4:23 AM, Joar Jegleim wrote:
> On 28 March 2014 01:59,  <kpneal@pobox.com> wrote:
>> On Thu, Mar 27, 2014 at 11:10:48AM +0100, Joar Jegleim wrote:
>>> But it's really not a problem for me how long it takes to warm up
>>> the l2arc; if it takes a week, that's ok. After all, I don't plan on
>>> rebooting this setup very often, plus I have 2 servers, so I have
>>> the option to let one server warm up before I hook it into
>>> production again after maintenance / patch upgrades and so on.
>>>
>>> I'm just curious whether the l2arc warms up by itself, or if I would
>>> have to do that manual rsync to force l2arc warmup (see the warmup
>>> sketch below the quoted text).
>> Have you measured the difference in performance between a cold L2ARC and
>> a warm one? Even better, have you measured the performance with a cold
>> L2ARC to see if it meets your performance needs?
> No I haven't.
> I actually started using those 2 SSDs for l2arc the day before I sent
> this mail to the list.
> I haven't done this the 'right' way by producing numbers for
> measurement, but I do know how this application works today: it pulls
> random jpegs from a dataset of about 1.6TB consisting of many millions
> of files (more than 20 million), and today that pool is served from
> 20 SATA 7.2K disks, which would be the slowest solution for random
> read access.
> Based on the huge performance gain SSDs promise on paper, and on
> graphs published by people who have measured this more thoroughly
> than I have, I'm pretty confident that whenever the application
> requests a jpeg, serving it from either RAM or SSD would be a
> substantial performance gain compared to serving it from the 7.2K
> array of disks.
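
(On the warmup question quoted above: yes, a manual read pass will
prime the L2ARC -- reads fill the ARC, and blocks spill into the L2ARC
as they are evicted from it. A minimal sketch on FreeBSD, assuming a
hypothetical /tank/images mount point:

    # Touch every file once; reads populate the ARC and, as blocks
    # age out of it, the L2ARC.
    find /tank/images -type f -print0 | xargs -0 cat > /dev/null

    # Watch the L2ARC fill level (bytes) grow:
    sysctl kstat.zfs.misc.arcstats.l2_size

Expect that pass to take a long while over 20+ million files.)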
No, the simplest solution is IMHO to stop trying to RAM-back a 1.6TB 
data set through various machinations.

A cache is just that -- a cache.  Its purpose is to make *frequently
accessed* data more quickly available to an application.  You have the
antithesis of cacheable data: a pure random access pattern with no
predictive or frequency-based way to determine what is likely to be
requested next.
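
Whether an access pattern is cache-friendly at all is easy to check
empirically: the ARC and L2ARC hit/miss counters are exposed as
sysctls on FreeBSD. A minimal check (the counters are cumulative, so
sample twice and diff to get a rate):

    sysctl kstat.zfs.misc.arcstats.hits \
           kstat.zfs.misc.arcstats.misses \
           kstat.zfs.misc.arcstats.l2_hits \
           kstat.zfs.misc.arcstats.l2_misses

If the miss counters grow about as fast as the hit counters, the cache
is buying you very little.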

IMHO the best and cheapest way to serve that data is to eliminate
rotational and positioning latency from the data path.  If it is a
read-nearly-always (or read-only) data set, then redundancy is only
necessary to prevent downtime (not data loss), since it can be easily
backed up.
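
Concretely, a minimal sketch of such a pool -- striped mirrors of SSDs,
which keep every read on flash while surviving any single-device
failure (da0 through da3 are hypothetical device names):

    zpool create tank mirror da0 da1 mirror da2 da3
    zfs create -o atime=off tank/images

atime=off is an easy win here, since the workload is read-nearly-always
and there is no point writing an access-time update per jpeg served.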

For the model you describe I would buy however many SSDs are necessary
to store the data set, design a means to back it up reliably, and be
done with it.
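
For the backup side, snapshots plus zfs send/receive are the obvious
fit for a read-mostly data set; a sketch with hypothetical dataset and
host names:

    # One full copy, then cheap incrementals:
    zfs snapshot tank/images@base
    zfs send tank/images@base | ssh backuphost zfs receive backup/images

    # Later, send only the blocks changed since @base:
    zfs snapshot tank/images@next
    zfs send -i @base tank/images@next | \
        ssh backuphost zfs receive backup/images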

Backing the data store with L2ARC (and the RAM to manage it) is likely
self-defeating: you are not only paying for BOTH the spinning rust AND
the SSDs, but you have also doubled the number of devices that can fail
and interrupt service.

-- 
-- Karl
karl@denninger.net

