Date: Sat, 17 Sep 2011 22:14:52 +0400
From: Lytochkin Boris <lytboris@gmail.com>
To: "David P. Discher" <dpd@bitgravity.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: [ZFS] starving reads while idle disks
Message-ID: <CAEJYa-TKjQtoR_39yDJVW8vxPDyLOMvBWsyvUVQ5gRzji+mSiQ@mail.gmail.com>
In-Reply-To: <6B437FA4-B422-4BE7-BDF5-F90717F3865B@bitgravity.com>
References: <CAEJYa-Si+4Tj5sj8fuxWfqjZgMX1cB8y=JWqJqe_F+R4Er9g_A@mail.gmail.com>
            <6B437FA4-B422-4BE7-BDF5-F90717F3865B@bitgravity.com>
Hi.

> Do you see the same read starvation when writing the tar to a file?
> (possibly outside the zpool)

Yep.

> I have anecdotal suspicion that /dev/null has some performance hit of
> blocking or locking.

No, no. Every program that tries to read from ZFS actually faces this
issue.

I found something more interesting. Let's presume I have 10 big
directories in the current directory. Issuing tar|dd commands on the
directories separately (so spawning 10 "threads") simultaneously
results in roughly 10x faster reads cumulatively (I saw 15 MB/s in
zpool iostat), and disk load may be as high as 70%. So I think there
is some read-throttling mechanism that limits read speed per read().
But I still have no clue where to find this bottleneck.

-- 
Wbr, Boris.
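For reference, the parallel-reader experiment described above can be sketched roughly like this. This is an assumed reconstruction, not the exact commands from the thread; the sample directories are created by the script itself so it is self-contained, and `bs=65536` is used for `dd` portability between BSD and GNU userlands.

```shell
#!/bin/sh
# Sketch: spawn one tar|dd pipeline per directory so the filesystem
# services several concurrent sequential read streams instead of one.
# Sample directories stand in for the "10 big dirs" from the email.
tmp=$(mktemp -d)
mkdir -p "$tmp/dir1" "$tmp/dir2"
echo payload1 > "$tmp/dir1/file"
echo payload2 > "$tmp/dir2/file"

for d in "$tmp"/dir1 "$tmp"/dir2; do
    # Each pipeline is an independent reader, discarded into /dev/null.
    tar cf - -C "$d" . | dd of=/dev/null bs=65536 2>/dev/null &
done
wait    # block until every reader pipeline has finished

status=done
echo "all readers finished"
rm -rf "$tmp"
```

With 10 directories, cumulative throughput in `zpool iostat` scales roughly with the number of concurrent readers, which is the observation that suggests a per-stream limit rather than a device bottleneck.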