Date: Thu, 20 Oct 2016 20:34:59 +0500
From: "Eugene M. Zheganin" <emz@norma.perm.ru>
To: freebsd-stable@freebsd.org
Subject: Re: zfs, a directory that used to hold lot of files and listing pause
Message-ID: <d3b23040-c933-9ad8-efa3-621313f4064e@norma.perm.ru>
In-Reply-To: <CANwv7WsKQy9pWOyvbFscB0FviNtVw+Ngn7EMyNv-kppUp1cxfQ@mail.gmail.com>
References: <4d9269af-ed64-bb73-eb7f-98a3f5ffd5a2@norma.perm.ru> <CANwv7WsKQy9pWOyvbFscB0FviNtVw+Ngn7EMyNv-kppUp1cxfQ@mail.gmail.com>
Hi.
On 20.10.2016 18:54, Nicolas Gilles wrote:
> Looks like it's not taking up any processing time, so my guess is
> the lag probably comes from stalled I/O ... bad disk?
Well, I cannot rule this out completely, but I first saw this lag on this
particular server about two months ago, and I'd guess two months is enough
time for ZFS on a redundant pool to accumulate errors if a disk were going
bad, but as you can see:
]# zpool status
  pool: zroot
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: resilvered 5.74G in 0h31m with 0 errors on Wed Jun  8 11:54:14 2016
config:

        NAME            STATE     READ WRITE CKSUM
        zroot           ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gpt/zroot0  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/zroot1  ONLINE       0     0     0

errors: No known data errors
there are none. Yup, the disks have different sector sizes, but this issue
happens with one particular directory, not all of them, so I guess that's
irrelevant.
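For what it's worth, the sector-size mismatch the status message complains about can be double-checked against the devices themselves. This is a hypothetical sketch using FreeBSD's diskinfo(8); the gpt/zroot0 and gpt/zroot1 labels are taken from the pool layout above, and the guards just skip partitions that aren't present on the machine running it:

```shell
# Print the reported sector/stripe sizes for each mirror member, if the
# tool and the device node are actually available; otherwise note the skip.
out=""
for part in zroot0 zroot1; do
    if command -v diskinfo > /dev/null 2>&1 && [ -e "/dev/gpt/$part" ]; then
        # sectorsize is the logical size; stripesize exposes the native
        # physical size on 512e/4Kn drives
        line=$(diskinfo -v "/dev/gpt/$part" | grep -E 'sectorsize|stripesize')
    else
        line="$part: diskinfo or device not present, skipping"
    fi
    echo "$line"
    out="$out $line"
done
```

A 512B sectorsize with a 4096B stripesize on zroot0 would match what zpool status reports.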
> Does a second "ls" immediately returned (ie. metadata has been
> cached) ?
Nope. Although the lag varies slightly between runs:
4.79s real 0.00s user 0.02s sys
5.51s real 0.00s user 0.02s sys
4.78s real 0.00s user 0.02s sys
6.88s real 0.00s user 0.02s sys
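The repeated-run measurement above can be reproduced with a small loop. This is a minimal sketch only: a freshly created scratch directory stands in for the real problem directory, and second-level resolution from date(1) is assumed to be enough for a multi-second lag:

```shell
# Create a scratch directory with many entries, then time three
# consecutive listings to see whether later runs are faster (cached).
dir=$(mktemp -d)
i=0
while [ "$i" -lt 500 ]; do
    touch "$dir/file$i"
    i=$((i + 1))
done
runs=""
for n in 1 2 3; do
    start=$(date +%s)
    ls "$dir" > /dev/null
    end=$(date +%s)
    runs="$runs run$n=$((end - start))s"
done
echo "$runs"
rm -rf "$dir"
```

On a healthy pool every run should complete in well under a second; in the case reported here, each run stays in the 4-7 s range, which is what suggests the metadata is not being served from cache.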
Thanks.
Eugene.
