Date: Fri, 25 Oct 2013 14:52:31 +0300
From: Vitalij Satanivskij <satan@ukr.net>
To: Andriy Gapon <avg@FreeBSD.org>
Cc: Vitalij Satanivskij <satan@ukr.net>, freebsd-hackers@FreeBSD.org
Subject: Re: FreeBSD 10.0-BETA1 #8 r256765M spend too much time in locks
Message-ID: <20131025115231.GA4274@hell.ukr.net>
In-Reply-To: <526A4306.2060500@FreeBSD.org>
References: <20131024074826.GA50853@hell.ukr.net> <20131024075023.GA52443@hell.ukr.net> <20131024115519.GA72359@hell.ukr.net> <20131024165218.GA82686@hell.ukr.net> <526A11B2.6090008@FreeBSD.org> <20131025072343.GA31310@hell.ukr.net> <526A4306.2060500@FreeBSD.org>
Thank you for the help. I will try your recommendations the next time the
load grows. The system was just rebooted with some configuration changes,
so we will have to wait a while before the load gets high again.

Andriy Gapon wrote:
AG> on 25/10/2013 10:23 Vitalij Satanivskij said the following:
AG> >
AG> > http://quad.org.ua/profiling.tgz
AG> >
AG> > results of both methods
AG> >
AG> > but for pmcstat too few buffers are configured by default, so not all
AG> > statistics made it into the summary :(
AG>
AG> From these profiling results alone I do not see pathologies.
AG> It looks like you have a lot of I/O going on[*].
AG> My guess is that the I/O requests are sufficiently small and contiguous,
AG> so ZFS performs a lot of I/O aggregation.  For that it allocates and then
AG> frees a lot of temporary buffers.
AG> And it seems that that's where the locks are greatly contended and CPU is
AG> burned.  Specifically in KVA allocation in vmem_xalloc/vmem_xfree.
AG>
AG> You can try at least two approaches.
AG>
AG> 1. Disable I/O aggregation.
AG>    See the following knobs:
AG>    vfs.zfs.vdev.aggregation_limit: I/O requests are aggregated up to this size
AG>    vfs.zfs.vdev.read_gap_limit: acceptable gap between two reads being aggregated
AG>    vfs.zfs.vdev.write_gap_limit: acceptable gap between two writes being aggregated
AG>
AG> 2. Try to improve buffer allocation performance by using uma(9) for that:
AG>    vfs.zfs.zio.use_uma=1
AG>    This is a boot-time tunable.
AG>
AG> Footnotes:
AG> [*] But perhaps there is some pathology that causes all that I/O to
AG> happen.  I can't tell that from the profiling data.  So this could be
AG> another thing to try to check.
AG>
AG> > Andriy Gapon wrote:
AG> > AG> When that high load happens again could you please run some
AG> > AG> profiling tool that is capable of capturing the whole stacks of
AG> > AG> hot code paths?
AG> > AG>
AG> > AG> I can suggest two alternatives:
AG> > AG>
AG> > AG> 1. hwpmc
AG> > AG>    pmcstat -S instructions -O sample.out
AG> > AG>    pmcstat -R sample.out -G summary.out
AG> > AG>
AG> > AG> 2. The following DTrace script:
AG> > AG>
AG> > AG>    profile:::profile-1113
AG> > AG>    /!(curthread->td_flags & 0x20)/
AG> > AG>    {
AG> > AG>            @stacks[stack()] = count();
AG> > AG>    }
AG> > AG>
AG> > AG>    END
AG> > AG>    {
AG> > AG>            trunc(@stacks, 10);
AG> > AG>            printa(@stacks);
AG> > AG>    }
AG> > AG> --
AG> > AG> Andriy Gapon
AG>
AG> --
AG> Andriy Gapon
AG> _______________________________________________
AG> freebsd-hackers@freebsd.org mailing list
AG> http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
AG> To unsubscribe, send any mail to "freebsd-hackers-unsubscribe@freebsd.org"
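
For the first suggestion above, aggregation can presumably be switched off
by shrinking the limits to zero. A minimal sketch, assuming these sysctls
are writable at runtime on this kernel (otherwise the same settings can go
into /boot/loader.conf); the zero values are illustrative, not something
stated in the thread:

    # Turn off ZFS vdev I/O aggregation; note the current values first
    # (sysctl vfs.zfs.vdev) so they can be restored afterwards.
    sysctl vfs.zfs.vdev.aggregation_limit=0
    sysctl vfs.zfs.vdev.read_gap_limit=0
    sysctl vfs.zfs.vdev.write_gap_limit=0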
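
For the second suggestion, since vfs.zfs.zio.use_uma is a boot-time tunable
it would have to be set in /boot/loader.conf and picked up on the next
reboot; a sketch:

    # /boot/loader.conf: back zio buffer allocation with uma(9)
    # (takes effect only after a reboot)
    vfs.zfs.zio.use_uma="1"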
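
On the pmcstat buffer shortage mentioned in the quoted text: if I read
hwpmc(4) right, the sample buffers are sized by loader tunables, so growing
them before the next sampling run might give a complete summary. A sketch
with arbitrary illustrative values (the tunable names are from hwpmc(4);
the numbers are not from this thread):

    # /boot/loader.conf: larger hwpmc sample buffers for the next pmcstat run
    kern.hwpmc.nsamples="4096"
    kern.hwpmc.nbuffers="1024"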
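
The quoted DTrace script can be saved to a file and run unchanged when the
load comes back; a usage sketch, with hotstacks.d as a made-up file name:

    # save the script quoted above as hotstacks.d, then:
    dtrace -s hotstacks.d
    # let it run while the machine is loaded, then stop it with Ctrl-C;
    # the END clause prints the ten hottest kernel stacks.
    # (The td_flags & 0x20 predicate appears to filter out idle threads.)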