Date:      Sat, 06 Jun 2009 19:16:59 +0200
From:      Kai Gallasch <gallasch@free.de>
To:        freebsd-fs@freebsd.org
Subject:   ZFS v13 performance drops with low memory on FreeBSD-7 STABLE
Message-ID:  <4A2AA48B.20803@free.de>

Hi.

I upgraded a server to 7-STABLE-amd64 with the MFC'd ZFS v13 about 8
days ago. Since then the machine has been running stably, without any
manual tuning of vm.kmem_size, vfs.zfs.arc, etc. in loader.conf - so far
so good :-)

In the last few days I noticed some performance issues with ZFS, as some
customers complained about slow MySQL database responses.

MySQL is running in a database jail on a ZFS v13 zpool; the websites
using the MySQL database are also running on ZFS on the same server.

The server is running about 30 production jails and has 16 GB RAM and
8 GB swap. Swap usage is currently only about 1%.

After debugging the MySQL settings for a while I found that when I
stopped some processes on the server that were using large amounts of
RAM, the database response times for queries were almost back to normal
again.

So to me this looks like when running applications and ZFS compete for
free RAM, ZFS loses. Is that so?

Is there anything I can do (besides buying more RAM :) to help ZFS
secure its share of RAM and prevent the performance drop?

I was thinking about setting vm.kmem_size_min to about 2 GB - would
that help ZFS performance?

BTW: Are the zfs related sysctls documented somewhere?

--Kai.


# /root/kmem.sh
TEXT=10170727, 9.69956 MB
DATA=1091940352, 1041.36 MB
TOTAL=1102111079, 1051.06 MB
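(The script just sums kernel text and data sizes; the TOTAL line is
TEXT + DATA, and the MB figures are bytes / 2^20, as this quick check
of the numbers above shows:)

```python
# Sanity-check the kmem.sh output pasted above:
# TOTAL = TEXT + DATA, and MB = bytes / 2**20.
TEXT = 10170727
DATA = 1091940352
TOTAL = TEXT + DATA
print(TOTAL)                    # 1102111079
print(round(TOTAL / 2**20, 2))  # 1051.06 (MB)
```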

I see the following ZFS-related sysctl values:

vm.kmem_size_scale: 3
vm.kmem_size_max: 329853485875
vm.kmem_size_min: 0
vm.kmem_size: 5496406016

kern.maxvnodes: 200000
kern.minvnodes: 25000
vfs.freevnodes: 25004
vfs.wantfreevnodes: 25000
vfs.numvnodes: 170965

vfs.zfs.arc_meta_limit: 1105666048
vfs.zfs.arc_meta_used: 598675456
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 552833024
vfs.zfs.arc_max: 4422664192
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 0
vfs.zfs.recover: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 30
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 35
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.version.zpl: 3
vfs.zfs.version.vdev_boot: 1
vfs.zfs.version.spa: 13
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
kstat.zfs.misc.arcstats.hits: 1145784907
kstat.zfs.misc.arcstats.misses: 111745603
kstat.zfs.misc.arcstats.demand_data_hits: 824346468
kstat.zfs.misc.arcstats.demand_data_misses: 44758436
kstat.zfs.misc.arcstats.demand_metadata_hits: 239559360
kstat.zfs.misc.arcstats.demand_metadata_misses: 26547668
kstat.zfs.misc.arcstats.prefetch_data_hits: 12999868
kstat.zfs.misc.arcstats.prefetch_data_misses: 21907841
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 68879211
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 18531658
kstat.zfs.misc.arcstats.mru_hits: 220554732
kstat.zfs.misc.arcstats.mru_ghost_hits: 24332697
kstat.zfs.misc.arcstats.mfu_hits: 847474912
kstat.zfs.misc.arcstats.mfu_ghost_hits: 26834361
kstat.zfs.misc.arcstats.deleted: 62523518
kstat.zfs.misc.arcstats.recycle_miss: 52718050
kstat.zfs.misc.arcstats.mutex_miss: 450373
kstat.zfs.misc.arcstats.evict_skip: 2822045644
kstat.zfs.misc.arcstats.hash_elements: 80450
kstat.zfs.misc.arcstats.hash_elements_max: 934929
kstat.zfs.misc.arcstats.hash_collisions: 25344131
kstat.zfs.misc.arcstats.hash_chains: 10124
kstat.zfs.misc.arcstats.hash_chain_max: 14
kstat.zfs.misc.arcstats.p: 863165963
kstat.zfs.misc.arcstats.c: 1044841750
kstat.zfs.misc.arcstats.c_min: 552833024
kstat.zfs.misc.arcstats.c_max: 4422664192
kstat.zfs.misc.arcstats.size: 1044917760
kstat.zfs.misc.arcstats.hdr_size: 18033120
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 379
kstat.zfs.misc.vdev_cache_stats.delegations: 21285135
kstat.zfs.misc.vdev_cache_stats.hits: 41347938
kstat.zfs.misc.vdev_cache_stats.misses: 33373407
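As a quick sanity check of those counters, the ARC hit ratios work out
to roughly 91% overall and about 95% for demand data reads:

```python
# ARC hit-ratio arithmetic from the kstat.zfs.misc.arcstats values above
hits   = 1145784907   # arcstats.hits
misses = 111745603    # arcstats.misses
overall = hits / (hits + misses)

demand_hits   = 824346468   # arcstats.demand_data_hits
demand_misses = 44758436    # arcstats.demand_data_misses
demand = demand_hits / (demand_hits + demand_misses)

print(f"overall ARC hit ratio:     {overall:.1%}")  # ~91.1%
print(f"demand-data ARC hit ratio: {demand:.1%}")   # ~94.9%
```

So the cache itself is doing well on hits; the 379 in
arcstats.memory_throttle_count is what points at memory pressure.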




