Date:      Fri, 09 Aug 2019 12:44:57 +0000
From:      bugzilla-noreply@freebsd.org
To:        fs@FreeBSD.org
Subject:   [Bug 237807] ZFS: ZVOL writes fast, ZVOL reads abysmal...
Message-ID:  <bug-237807-3630-vlJsbWwBZE@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-237807-3630@https.bugs.freebsd.org/bugzilla/>
References:  <bug-237807-3630@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237807

--- Comment #10 from Nils Beyer <nbe@renzel.net> ---
Maybe I'm too stupid, I don't know. I can't get the pool to read fast...

Created the pool from scratch. Updated to latest 12-STABLE. But reads from that
pool are still abysmal.

Current pool layout:
--------------------------------------------------------------------------------
        NAME        STATE     READ WRITE CKSUM
        veeam-backups  ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da7     ONLINE       0     0     0
          raidz1-2  ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da14    ONLINE       0     0     0
            da17    ONLINE       0     0     0
          raidz1-3  ONLINE       0     0     0
            da18    ONLINE       0     0     0
            da21    ONLINE       0     0     0
            da22    ONLINE       0     0     0
          raidz1-4  ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da15    ONLINE       0     0     0
            da16    ONLINE       0     0     0
          raidz1-5  ONLINE       0     0     0
            da11    ONLINE       0     0     0
            da8     ONLINE       0     0     0
            da3     ONLINE       0     0     0
          raidz1-6  ONLINE       0     0     0
            da23    ONLINE       0     0     0
            da20    ONLINE       0     0     0
            da19    ONLINE       0     0     0
          raidz1-7  ONLINE       0     0     0
            da10    ONLINE       0     0     0
            da12    ONLINE       0     0     0
            da13    ONLINE       0     0     0

errors: No known data errors
--------------------------------------------------------------------------------
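
The create command isn't quoted anywhere in this PR; given the layout above it
must have been roughly the following (sketch only, 8x raidz1 of 3 disks each,
reconstructed from the zpool status output):
--------------------------------------------------------------------------------
# sketch, vdev grouping taken from the layout shown above
zpool create veeam-backups \
        raidz1 da0  da1  da2  \
        raidz1 da4  da5  da7  \
        raidz1 da9  da14 da17 \
        raidz1 da18 da21 da22 \
        raidz1 da6  da15 da16 \
        raidz1 da11 da8  da3  \
        raidz1 da23 da20 da19 \
        raidz1 da10 da12 da13
--------------------------------------------------------------------------------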


used bonnie++:
--------------------------------------------------------------------------------
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
veeambackups.local 64G   141  99 471829  59 122365  23     5   8 40084   8  1016  19
Latency             61947us     348ms     618ms    1634ms     105ms     190ms
Version  1.97       ------Sequential Create------ --------Random Create--------
veeambackups.local -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 17378  58 +++++ +++ +++++ +++ 32079  99
Latency              2424us      44us     388ms    2295us      36us      91us
1.97,1.97,veeambackups.local,1,1565375578,64G,,141,99,471829,59,122365,23,5,8,40084,8,1016,19,16,,,,,+++++,+++,+++++,+++,17378,58,+++++,+++,+++++,+++,32079,99,61947us,348ms,618ms,1634ms,105ms,190ms,2424us,44us,388ms,2295us,36us,91us
--------------------------------------------------------------------------------

tested locally. No iSCSI, no NFS.
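
The invocation was along these lines (the exact flags aren't in this comment,
so take it as a sketch; -d points at the pool's default mountpoint and -s
matches the 64G test size shown above):
--------------------------------------------------------------------------------
bonnie++ -d /veeam-backups -s 64g -u root
--------------------------------------------------------------------------------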

"gstat" tells me that the harddisks are only 15% busy.
CPU load averages:  0.51,  0.47,  0.39
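
(The busy percentage comes from watching something like this; the filter regex
is just an assumption to limit the view to the da disks:)
--------------------------------------------------------------------------------
gstat -I 1s -f '^da[0-9]+$'
--------------------------------------------------------------------------------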

ZFS recordsize is default 128k.
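
For reference, checking the relevant properties looks like this (the zvol name
below is hypothetical, only there to show the volblocksize check for the zvols
this PR is about):
--------------------------------------------------------------------------------
zfs get recordsize,compression,primarycache veeam-backups
zfs get volblocksize,volsize veeam-backups/somezvol   # hypothetical zvol name
--------------------------------------------------------------------------------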

Maybe too many top-level VDEVs?


Maybe the HBA sucks for ZFS? A simple parallel DD using:
--------------------------------------------------------------------------------
# read 1 GiB raw from each of the 24 disks in parallel, bypassing ZFS
for NR in `jot 24 0`; do
        dd if=/dev/da${NR} of=/dev/null bs=1M count=1k &
done
--------------------------------------------------------------------------------
delivers 90MB/s for each of the 24 drives during the run, which results in
90*24 = 2160MB/s total. Should be plenty for the pool.
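
To rule out the raw disks I'd still compare that against a plain sequential
read of a file on the pool itself, roughly like this (sketch only; the test
file is hypothetical and would need to be much larger than ARC for the numbers
to mean anything):
--------------------------------------------------------------------------------
# read a large existing file back from the pool dataset; keep it bigger than
# ARC so the result reflects the disks rather than the cache
dd if=/veeam-backups/testfile of=/dev/null bs=1M
--------------------------------------------------------------------------------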


I'm really out of ideas apart from trying 13-CURRENT or FreeNAS or Linux or or
or - which I'd like to avoid...

Needless to say, read performance via NFS or iSCSI is still pathetic, which
makes the current setup unusable as an ESXi datastore and makes me afraid of
future restore jobs in the TB size range...

-- 
You are receiving this mail because:
You are the assignee for the bug.


