Date:      Fri, 10 May 2013 13:41:37 +0300
From:      Daniel Kalchev <daniel@digsys.bg>
To:        "Ronald Klop" <ronald-freebsd8@klop.yi.org>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: recommended memory for zfs
Message-ID:  <60A0A396-2E56-488F-BB5A-6EDD51B8039D@digsys.bg>
In-Reply-To: <op.wwu433zi8527sy@ronaldradial>
References:  <518BA237.3030700@gmail.com> <518C450B.5070809@ShaneWare.Biz> <518C51AF.5050609@gmail.com> <20130510020628.GA98750@icarus.home.lan> <518C5902.5050909@gmail.com> <op.wwu433zi8527sy@ronaldradial>


On May 10, 2013, at 1:08 PM, "Ronald Klop" <ronald-freebsd8@klop.yi.org> wrote:

> On Fri, 10 May 2013 04:18:42 +0200, Benjamin Adams <benjamindadams@gmail.com> wrote:
>
>> On 05/09/2013 10:06 PM, Jeremy Chadwick wrote:
>>> On Thu, May 09, 2013 at 09:47:27PM -0400, Benjamin Adams wrote:
>>>> On 05/09/2013 08:53 PM, Shane Ambler wrote:
>>>>> On 09/05/2013 22:48, Benjamin Adams wrote:
>>>>>> Hello zfs question about memory.
>>>>>> I heard zfs is very ram hungry.
>>>>>> Service looking to run:
>>>>>> - nginx
>>>>>> - postgres
>>>>>> - php-fpm
>>>>>> - python
>>>>>>
>>>>>> I have a machine with two quad core cpus but only 4 G Memory
>>>>>>
>>>>>> I'm looking to buy more ram now.
>>>>>> What would be the recommended amount of memory for zfs across 6 drives on
>>>>>> this setup?
>>>>>>
>>>>> I believe I heard a calculation of 1GB cache per 1TB of disk. But
>>>>> basically zfs will use all free ram available if you access that
>>>>> much data from disk. You will want to set vfs.zfs.arc_max to allow
>>>>> enough ram for your apps to work in.
>>>>>
>>>>> If you consider the files for your website and the data you store
>>>>> you may find that you would never fill more than 500MB of cache.
>>>>>=20
>>>>> If you will be serving large media files that will easily use up
>>>>> the cache you could give them their own filesystem that only
>>>>> caches metadata - zfs set primarycache=metadata zroot/mediafiles
>>>>>
>>>>>
>>>> Thanks for all the replies. Size of DB and HDs are:
>>>>
>>>> Current DB size = 23 GB
>>>> HD sizes = (6) 500 GB drives
>>> Nobody is going to be able to give you a precise/accurate recommendation
>>> given the lack of detail provided, I'm sorry to say.  What's the RES
>>> size of nginx (all processes combined)?  What's the RES size of
>>> postgres (same)?  Do you have PHP scripts that "run amok" for long
>>> periods of time and take up lots of RAM?  Same with python?  How many
>>> concurrent visitors and what sort of content are you hosting?  Do you
>>> maintain/write your own PHP/Python code or are you using some crap like
>>> Wordpress?
>>>
>>> This is just a **small** list of questions -- and what may come as a
>>> shock is that I do not expect you to provide answers to any of them.
>>> They are questions that you should, for yourself, attempt to answer and
>>> work out what you need from there ("teach a man to fish" and all that).
>>>
>>> The advice of "1GB of RAM per 1TB of disk space" is absolute nonsense on
>>> numerous levels -- whoever gave this advice to Shane either has no
>>> understanding of how filesystems/ZFS work, or does but chose to
>>> simplify to the point where they're providing half-assed information.
>>> There is no direct, or even indirect, correlation between disk capacity
>>> and ZFS ARC size -- what matters is your "working set" (to quote Tom).
>>> You need to have some idea of how much disk I/O you're doing, and what
>>> type of I/O (sequential or random).
>>>
>>> If you want my general advice, Benjamin, it's this: get yourself a
>>> system with a *minimum* of 8GB of RAM that has the physical possibility
>>> of supporting more (and only add more RAM when/if you know you need it);
>>> do not bother with ZFS on a system with 4GB.  Run amd64, not i386 (I don't
>>> recommend bothering with ZFS on i386 -- I am not going to get into a
>>> discussion about this either).  Run stable/9, not 9.1-RELEASE.  Avoid
>>> compression and dedup.  And test disk failures as well (don't get caught
>>> with your pants down later).
>>>
>>> The above advice comes from someone who did hosting (web/ssh/etc.) for
>>> almost 20 years with the KISS principle applied at all levels.  YMMV
>>> though, depending on what all you're doing/what you truly need.
>>>=20
>>> Good luck.
>>>
>> Jeremy,
>>
>> Was just seeing if I should get a raid controller and more ram down the road.
>> List of priorities.
>>
>> Main thing is I moved from BSD when 9.0 came out.  Was looking to see if
>> zfs is included in the installer now.
>>
>> Sum up:
>> upgrade ram to 16GB (not 64 like planned)
>> and a raid controller that supports level 5.
>>
>
> Let ZFS do the RAID stuff. Do not use a RAID controller, but give the
> plain disks to ZFS. Some of the nice features come from ZFS doing the
> RAID stuff.

To expand on this:

Get yourself a nice HBA -- non-RAID! For example, something based on the LSI SAS2008 with IT firmware. With few enough disks you can avoid using SAS expanders as well. RAID controllers, in addition to causing all kinds of trouble, are unlikely to have sufficient bandwidth and might turn out to be the bottleneck (unless you are prepared to spend an unholy amount of money, which is better spent on RAM and CPU).

If you want performance and low latency, avoid using compression and dedup in ZFS. Set your record size appropriately for postgresql (8k) *before* you run initdb. It is best to create a separate filesystem for the database and set that property only there. If your database is heavy on updates, you might be interested in using an SSD for the ZIL. In general, if you can afford it, a cheap SSD for L2ARC might do wonders -- if your data set can fit there. If you intend to use SSDs and want the best performance, use different SSDs for the ZIL and L2ARC. The first needs to be fast at writing and optimised for this (for example, the OCZ Vector) -- it does not need to be large at all. The second has to provide high read throughput and IOPS, not so much for writing.
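The recordsize, ZIL and L2ARC setup above could look roughly like this -- a sketch only, assuming a pool named "zroot" and example SSD device names (ada2, ada3):

```shell
# Sketch only: pool name "zroot" and devices ada2/ada3 are assumptions.
# Create a dedicated dataset for postgresql with an 8k record size,
# *before* running initdb, so the database files inherit it.
zfs create -o recordsize=8k zroot/pgsql

# Attach a fast-writing SSD as a separate log device (ZIL) ...
zpool add zroot log ada2

# ... and a read-optimised SSD as a cache device (L2ARC).
zpool add zroot cache ada3
```

Setting recordsize only on the database dataset leaves the default (128k at the time) in place for everything else.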

Lately, I have been building more and more servers with SSDs only and ZFS, and performance has been incomparable to spinning drives.

Also, check what the sector size of your drives is. Most drives these days are already 4k and you need to tell ZFS that (because the drives lie about it). It is safer to plan for 4k drives, as future replacements are likely to be of that kind. The same goes for SSDs.

Daniel


