Date:      Mon, 7 Mar 2011 19:41:10 +0100
From:      Matthias Gamsjager <mgamsjager@gmail.com>
To:        Joshua Boyd <boydjd@jbip.net>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: kmem_map too small with ZFS and 8.2-RELEASE
Message-ID:  <AANLkTin_aFNercpcBsg41OO3BXL_mYUNn%2BhjncXF14s2@mail.gmail.com>
In-Reply-To: <AANLkTin0eZ1_0n1VNYChNTOG6HDayegjHjGeGHso4PMY@mail.gmail.com>
References:  <1299232133.18671.3.camel@pc286.embl.fr> <20110304100517.GA23249@icarus.home.lan> <AANLkTikQiTi25TR6uDD2umRZQrOL8YZzEC960oWf4wax@mail.gmail.com> <20110304105608.GA23887@icarus.home.lan> <AANLkTimrdnxsxUpmZnr3=w5J8_46ZM91crEfY6c_ZR4z@mail.gmail.com> <20110306090455.GA87055@icarus.home.lan> <AANLkTin0eZ1_0n1VNYChNTOG6HDayegjHjGeGHso4PMY@mail.gmail.com>

Let me back up my claim with data, too:

AMD dual-core, 4 GB RAM, 4x 1 TB Samsung drives; OS installed on a separate UFS disk.

FreeBSD fb 8.2-STABLE FreeBSD 8.2-STABLE #0 r219265: Fri Mar  4 16:47:35 CET 2011


loader.conf:
vm.kmem_size="6G"
vfs.zfs.txg.timeout="5"
vfs.zfs.vdev.min_pending=1 #default = 4
vfs.zfs.vdev.max_pending=4 #default= 35

sysctl.conf:
vfs.zfs.txg.write_limit_override=805306368
kern.sched.preempt_thresh=220
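For completeness: the loader.conf values are boot-time tunables, while the sysctl.conf ones can also be changed on a running system. A rough sketch of inspecting and adjusting them (whether a given tunable is runtime-writable depends on your ZFS version, so treat this as illustrative):

```shell
# Boot-time tunables from loader.conf are read-only at runtime;
# query their current values:
sysctl vm.kmem_size
sysctl vfs.zfs.vdev.min_pending vfs.zfs.vdev.max_pending

# Runtime-tunable sysctls can be adjusted on the fly:
sysctl vfs.zfs.txg.write_limit_override=805306368

# The prefetch toggle compared below is normally set at boot
# via /boot/loader.conf, e.g. vfs.zfs.prefetch_disable="1";
# check its current state with:
sysctl vfs.zfs.prefetch_disable
```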

Zpool:
NAME        STATE     READ WRITE CKSUM
 storage     ONLINE       0     0     0
  mirror    ONLINE       0     0     0
    ad6     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
  mirror    ONLINE       0     0     0
    ad4     ONLINE       0     0     0
    ad8     ONLINE       0     0     0

NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
storage  1.81T  1.57T   245G    86%  ONLINE  -
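The layout above is a stripe across two two-way mirrors. For reference, a pool like this would be created with something along these lines (assuming the four disks are empty and named as in the status output):

```shell
# Create a pool striped across two two-way mirrors:
zpool create storage mirror ad6 ad10 mirror ad4 ad8

# Verify layout and capacity:
zpool status storage
zpool list storage
```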

vfs.zfs.prefetch_disable = 1 (prefetch off):
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
fb           10000M    54  74 99180  42 35955  14   140  73 68174  11 180.6   4
Latency               295ms    1581ms    1064ms     428ms   58640us     755ms
Version  1.96       ------Sequential Create------ --------Random Create--------
fb                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6697  39 +++++ +++ 11798  74 10060  61 +++++ +++ 11104  72
Latency               213ms     134us     257us   32866us    2672us     174us

vfs.zfs.prefetch_disable = 0 (prefetch on):
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
fb           10000M    52  74 107602  46 65443  29   135  74 243760  42 186.5   4
Latency               214ms     865ms    1525ms   79771us     254ms     924ms
Version  1.96       ------Sequential Create------ --------Random Create--------
fb                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  8152  56 +++++ +++  4534  36 10966  69 32607  74  9692  71
Latency               112ms   21108us     169ms   30018us    4097us     318us

Sequential block read performance: 68 MB/s with prefetch disabled vs. 243 MB/s with it enabled.

Maybe the kind of workload you have does not work well with prefetch, I
don't know, but for the sequential load I put on my NAS, which I use as
a media tank, it boosts performance quite a bit.
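If anyone wants to reproduce the comparison: the tables above look like bonnie++ 1.96 output, and the flags below are inferred from the 10000M test size and 16-file count shown there (the target directory and user are placeholders for your setup):

```shell
# Toggle vfs.zfs.prefetch_disable ("1" or "0") in
# /boot/loader.conf and reboot between the two runs.
# -s 10000: 10000 MB I/O test size (>2x RAM to defeat caching)
# -n 16:    16*1024 small files for the create/delete phases
bonnie++ -d /storage/bench -s 10000 -n 16 -u nobody
```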



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?AANLkTin_aFNercpcBsg41OO3BXL_mYUNn%2BhjncXF14s2>