Date:      Wed, 22 Nov 2023 14:04:39 +0700
From:      Eugene Grosbein <eugen@grosbein.net>
To:        Jonathan Chen <jonc@chen.org.nz>, freebsd-stable@freebsd.org
Subject:   Re: Unusual ZFS behaviour
Message-ID:  <bdaae481-427e-2ce0-008f-30516b9a47d7@grosbein.net>
In-Reply-To: <f8764549-773a-4695-b1fc-76e70e49de1b@chen.org.nz>
References:  <f8764549-773a-4695-b1fc-76e70e49de1b@chen.org.nz>

22.11.2023 13:49, Jonathan Chen wrote:
> Hi,
> 
> I'm running a somewhat recent version of STABLE-13/amd64: stable/13-n256681-0b7939d725ba: Fri Nov 10 08:48:36 NZDT 2023, and I'm seeing some unusual behaviour with ZFS.
> 
> To reproduce:
>  1. one big empty disk, GPT scheme, 1 freebsd-zfs partition.
>  2. create a zpool, eg: tank
>  3. create 2 sub-filesystems, eg: tank/one, tank/two
>  4. fill each sub-filesystem with large files until the pool is ~80% full. In my case I had 200 10GB files in each.
>  5. in one session run 'md5 tank/one/*'
>  6. in another session run 'md5 tank/two/*'
> 
> For most of my runs, one of the sessions against a sub-filesystem will be starved of I/O, while the other one is performant.
> 
> Is anyone else seeing this?

Please try repeating the test with atime updates disabled:

zfs set atime=off tank/one
zfs set atime=off tank/two

Does it make any difference?
Does it make any difference if you instead import the pool with readonly=on?
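For reference, a read-only import would look like this (assuming the pool is named tank and is currently imported):

```shell
zpool export tank
zpool import -o readonly=on tank
```

A read-only import rules out all write activity (atime updates included), which helps isolate whether the starvation comes from the read path or from writes competing for I/O.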

Writing to a pool that is ~80% full is almost always slow for ZFS.



