Date:      Wed, 8 Apr 2009 23:18:51 -0400
From:      Hussain Ali <hali@datapipe.net>
To:        <freebsd-fs@freebsd.org>
Subject:   Re: ZFSKnownProblems - needs revision?
Message-ID:  <20090409031851.GE6052@datapipe.com>


> Ivan Voras wrote:
>
>> * Are the issues on the list still there?
>> * Are there any new issues?
>> * Is somebody running ZFS in production (non-trivial loads) with
>>   success? What architecture / RAM / load / applications used?
>> * How is your memory load? (does it leave enough memory for other
>>   services)

I have a storage server that's under constant heavy write load, with
heavy reads at times, though not with high concurrency:

# df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/ufs/rootfs     19G    395M     17G     2%    /
devfs              1.0K    1.0K      0B   100%    /dev
/dev/ufs/tmp       4.8G     20K    4.5G     0%    /tmp
/dev/ufs/usr        19G    2.9G     15G    16%    /usr
/dev/ufs/var        15G    7.5G    5.8G    56%    /var
backupstorage       94T     80T     13T    86%    /backupstorage


# cat /etc/sysctl.conf
# $Id: sysctl.conf,v 1.3 2009/04/09 03:06:31 hali Exp root $

security.bsd.see_other_uids=0
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.icmplim=50
net.inet.tcp.sendspace=524288
net.inet.tcp.recvspace=524288
net.inet.ip.intr_queue_maxlen=2048
net.inet.ip.intr_queue_drops=4096
kern.ipc.maxsockbuf=2097152
kern.ipc.somaxconn=8096
kern.maxfiles=443808
vfs.hirunningspace=4194304
vfs.ufs.dirhash_maxmem=4194304
vfs.lookup_shared=1
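
For what it's worth, these can also be changed on the running system
without a reboot and read back to verify. A minimal sketch, using the
same value as above:

# sysctl net.inet.tcp.sendspace=524288     (set one knob live)
# sysctl net.inet.tcp.sendspace            (read it back)
# /etc/rc.d/sysctl restart                 (re-apply /etc/sysctl.conf)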

# cat /boot/loader.conf
# $Id: loader.conf,v 1.4 2009/04/09 03:07:40 hali Exp root $
isp_load="YES"
ispfw_load="YES"
isp_2400_load="NO"
vm.kmem_size_max="1073741824"
vm.kmem_size="1073741824"
vfs.zfs.prefetch_disable=1
vfs.zfs.arc_max="786M"
kern.maxvnodes="50000
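
The loader.conf tunables only take effect at boot, but you can
sanity-check what the kernel actually picked up, e.g. (a sketch):

# sysctl vm.kmem_size vfs.zfs.arc_max      (tunables as booted)
# sysctl kstat.zfs.misc.arcstats.size      (current ARC size, in bytes)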

# zpool iostat 3
                  capacity     operations    bandwidth
pool            used  avail   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
backupstorage  80.4T  14.9T      2    355   186K  38.7M
backupstorage  80.4T  14.9T      0    316      0  31.7M
backupstorage  80.4T  14.9T      0     99      0  12.2M
backupstorage  80.4T  14.9T      0    164      0  15.7M
backupstorage  80.4T  14.9T      0    225      0  22.0M

I have another one in another DC, but with less capacity:

backupstorage              56T     32T     24T    57%    /backupstorage

Both machines have the following specs:

HP ProLiant DL385 G2
8GB RAM
two dual-core AMD 2.2GHz CPUs
3 x Nexsan SATABeast arrays for the SAN.


Inbound traffic is about 900Mb/s. Uptime has generally been 3-4 months
between reboots to increase the ZFS ARC/KVM sizes; I should just max
them out, but it's relatively stable. I am waiting on ZFS v8+ for L2ARC
and a separate ZIL. Load average is about 1.0. My wish list: KVM support
for a 64GB ARC, Fusion-io driver support for the ZIL, version 8 of ZFS
in FreeBSD 7.2, active multipath, etc.
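
Once a pool version with slog/L2ARC support lands, attaching them should
be a one-liner each. A sketch, with da4/da5 as placeholder devices:

# zpool add backupstorage log da4          (dedicated ZIL device)
# zpool add backupstorage cache da5        (L2ARC cache device)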

It works, it's stable, and it's in production, but it's not as if I'm
cvsupping the ports tree 100 times concurrently.

--
-hussain


