Date:      Fri, 25 Oct 2013 13:08:06 +0300
From:      Andriy Gapon <avg@FreeBSD.org>
To:        Vitalij Satanivskij <satan@ukr.net>
Cc:        freebsd-hackers@FreeBSD.org
Subject:   Re: FreeBSD 10.0-BETA1 #8 r256765M spend too  much time in locks
Message-ID:  <526A4306.2060500@FreeBSD.org>
In-Reply-To: <20131025072343.GA31310@hell.ukr.net>
References:  <20131024074826.GA50853@hell.ukr.net> <20131024075023.GA52443@hell.ukr.net> <20131024115519.GA72359@hell.ukr.net> <20131024165218.GA82686@hell.ukr.net> <526A11B2.6090008@FreeBSD.org> <20131025072343.GA31310@hell.ukr.net>

on 25/10/2013 10:23 Vitalij Satanivskij said the following:
> 
> 
> http://quad.org.ua/profiling.tgz
> 
> results of both methods
> 
> but for pmcstat too few buffers are configured by default, so not all statistics made it into the summary :(

From these profiling results alone I do not see pathologies.
It looks like you have a lot of I/O going on[*].
My guess is that the I/O requests are sufficiently small and contiguous, so ZFS
performs a lot of I/O aggregation.  For that it allocates and then frees a lot
of temporary buffers.
And it seems that that is where the locks are heavily contended and the CPU is
burned, specifically in KVA allocation in vmem_xalloc()/vmem_xfree().
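
If you want to confirm where the time goes, a rough DTrace sketch like the one
below should show which kernel stacks block or spin on locks the most.  This
assumes the lockstat provider is available on your kernel; the probe selection
and the 30 second window are only illustrative.

dtrace -n '
lockstat:::adaptive-block,
lockstat:::spin-spin
{
        /* aggregate blocking/spinning events by kernel stack */
        @stacks[probename, stack()] = count();
}
tick-30s
{
        /* keep only the ten hottest stacks, then quit */
        trunc(@stacks, 10);
        exit(0);
}'

If the guess above is right, the dominant stacks should end in
vmem_xalloc()/vmem_xfree().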

You can try at least two approaches.

1. Disable I/O aggregation.
See the following knobs:
vfs.zfs.vdev.aggregation_limit: I/O requests are aggregated up to this size
vfs.zfs.vdev.read_gap_limit: Acceptable gap between two reads being aggregated
vfs.zfs.vdev.write_gap_limit: Acceptable gap between two writes being aggregated
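
For example, a rough sketch (the zero values are just one way of illustrating
turning aggregation off, not a recommendation; if these knobs are not writable
at runtime on your kernel, put the same settings in /boot/loader.conf instead):

# run as root; a limit of 0 should effectively disable aggregation
sysctl vfs.zfs.vdev.aggregation_limit=0
# and close the read/write gap windows as well
sysctl vfs.zfs.vdev.read_gap_limit=0
sysctl vfs.zfs.vdev.write_gap_limit=0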

2. Try to improve buffer allocation performance by using uma(9) for those buffers:
vfs.zfs.zio.use_uma=1
This is a boot-time tunable.
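
For example, assuming the standard FreeBSD boot configuration, something like
this in /boot/loader.conf followed by a reboot:

# allocate ZIO data buffers from uma(9) zones instead of raw KVA allocations
vfs.zfs.zio.use_uma=1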

Footnotes:
[*] But perhaps there is some pathology that causes all that I/O to happen in
the first place.  I can't tell that from the profiling data, so this could be
another thing to check.

> Andriy Gapon wrote:
> AG> 
> AG> When that high load happens again could you please run some profiling tool that
> AG> is capable of capturing the whole stacks of hot code paths?
> AG> 
> AG> I can suggest two alternatives:
> AG> 
> AG> 1. hwpmc
> AG> pmcstat -S instructions -O sample.out
> AG> pmcstat -R sample.out -G summary.out
> AG> 
> AG> 2. The following DTrace script:
> AG> 
> AG> profile:::profile-1113
> AG> /!(curthread->td_flags & 0x20)/
> AG> {
> AG> 
> AG>         @stacks[stack()] = count();
> AG> }
> AG> 
> AG> END
> AG> {
> AG>         trunc(@stacks, 10);
> AG>         printa(@stacks);
> AG> }
> AG> -- 
> AG> Andriy Gapon
> AG> _______________________________________________
> AG> freebsd-hackers@freebsd.org mailing list
> AG> http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
> AG> To unsubscribe, send any mail to "freebsd-hackers-unsubscribe@freebsd.org"
> 


-- 
Andriy Gapon


