Date:      Mon, 18 Oct 2021 16:23:18 -0700
From:      Devin Teske <dteske@freebsd.org>
To:        Mateusz Piotrowski <0mp@FreeBSD.org>
Cc:        dtrace@freebsd.org
Subject:   Re: Measuring performance impact of tracing a system under load
Message-ID:  <E62DFE70-4749-46E4-A796-B932847DD311@freebsd.org>
In-Reply-To: <f9f0b1db-cb6e-408c-dc6c-f0c8dd680998@FreeBSD.org>
References:  <f9f0b1db-cb6e-408c-dc6c-f0c8dd680998@FreeBSD.org>



> On Oct 18, 2021, at 2:32 PM, Mateusz Piotrowski <0mp@FreeBSD.org> wrote:
>
> Hello everyone,
>
> I would like to measure the overhead / performance impact that tracing with DTrace has on a system under load. I'm looking for the right way to do it. Obviously, I should not be using DTrace itself to measure that in this situation. I could use other stats(7) tools, but they do not seem sufficient on their own. Measuring the impact indirectly sounds like a good idea (e.g., by looking at the throughput between two machines in a test lab as they transfer bytes from A to B in two scenarios: with active tracing and without). Are there any other options? Do you have any insights to share regarding measuring DTrace overhead?
>
> I'd be grateful for all suggestions and pointers.
>
>=20

I have done this for several sub-systems when, for work, I had to show the progress of bpftrace against DTrace and formulate plans for sampling strategies based on observed overhead.

Running in one shell:

dd if=/dev/urandom of=/tmp/trashfile bs=1 count=5000000

And another [Linux] shell:

bpftrace -e 'tracepoint:syscalls:sys_enter_read { }'

This resulted in a 23% performance hit on the dd while bpftrace was running. We ran the test 10 times, on multiple pieces of hardware, under multiple operating systems, and tried our best to control the testing environment parameters.

The hit on FreeBSD was barely noticeable. I think I calculated it as less than 6% at its worst.
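For the FreeBSD side of that comparison, the mirror of the Linux one-liner above would presumably be an empty-bodied DTrace clause on the same syscall (this exact one-liner is my reconstruction, not quoted from the original test; it needs root):

```sh
dtrace -n 'syscall::read:entry { }'
```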

By the way, did you know you can run DTrace on CentOS?

cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-ol7.repo
# The UEK kernel provides the DTrace support that dtrace-utils needs.
yum install -y kernel-uek{,-devel} dtrace-utils
# EFI: back up the current grub config, then regenerate it to pick up UEK.
cp /boot/efi/EFI/centos/grub.cfg{,.`date +%s`.bak}
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
# BIOS: boot the newly installed kernel by default.
grubby --set-default-index 0
reboot

ASIDE: And it's actually fun to use and quite powerful. Back to the question at hand though ...

I have done similar things with "nc" in place of "dd".
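A sketch of that nc variant for the two-machine throughput scenario (the host name "hostB", the port, and the payload size are placeholders of mine, not from the original tests; -N is the OpenBSD-style "close after EOF" flag, and nc flags vary between implementations):

```sh
# Machine B (receiver): sink the stream and discard it.
nc -l 4444 > /dev/null

# Machine A (sender): time a fixed-size transfer (1 GiB of zeros);
# repeat with and without tracing active and compare the throughput.
/usr/bin/time -p sh -c 'dd if=/dev/zero bs=64k count=16384 2>/dev/null | nc -N hostB 4444'
```

Using /dev/zero rather than /dev/urandom keeps the sender CPU-cheap, so the measured difference is dominated by the network path being traced.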

I also suggest checking out "dpv" in FreeBSD base for rate monitoring with and without tracing active, to gain insights on overhead.
—
Devin
