Date: Wed, 27 Jun 2018 13:01:40 -0400
From: Ryan Stone <rysto32@gmail.com>
To: Alan Somers <asomers@freebsd.org>
Cc: jkim@freebsd.org, Andriy Gapon <avg@freebsd.org>, FreeBSD Current <freebsd-current@freebsd.org>
Subject: Re: TSC calibration in virtual machines
Message-ID: <CAFMmRNyFppU94S=QjQGZY4RJau82xtg45csELC5q5Y35R7VwUw@mail.gmail.com>
In-Reply-To: <CAOtMX2gcUybMhPdEzBWX07-oPdmJdqn+vW7KkNZvs2sFmcHFNw@mail.gmail.com>
References: <8ac353c5-d188-f432-aab1-86f4ca5fd295@FreeBSD.org> <4d7957f6-9497-19ff-4dbb-436bb6b05a56@FreeBSD.org> <CAOtMX2gcUybMhPdEzBWX07-oPdmJdqn+vW7KkNZvs2sFmcHFNw@mail.gmail.com>
I would guess that the calibration can fail because, when running under a hypervisor, the FreeBSD guest code can be descheduled at the wrong time. As I recall, the current algorithm looks like:

1. Sample rdtsc
2. Use a fixed-frequency timer to busy-wait for exactly 1 second
3. Sample rdtsc again
4. tsc_freq = sample2 - sample1;

If we are descheduled between steps 2 and 3, the time we spend off-cpu is not accounted for at step 4. On bare metal this is not possible, as neither the scheduler nor interrupts are running yet at that point. Although, come to think of it, I seem to recall SMIs mucking this up long ago, for exactly the same reason.
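
For illustration only, a rough user-space sketch of that four-step sequence might look like the following. This is not the actual in-kernel calibration code: rdtsc is read via inline assembly here, and clock_gettime(CLOCK_MONOTONIC) stands in for whatever fixed-frequency timer the kernel busy-waits on.

/*
 * Hypothetical sketch of the calibration sequence described above
 * (x86 only, user space; not the real kernel code).
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Read the time-stamp counter directly. */
static inline uint64_t
rdtsc(void)
{
	uint32_t lo, hi;

	__asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
	return (((uint64_t)hi << 32) | lo);
}

int
main(void)
{
	struct timespec start, now;
	uint64_t sample1, sample2;
	int64_t elapsed_ns;

	sample1 = rdtsc();		/* 1. first TSC sample */

	/* 2. busy-wait ~1 second against a fixed-frequency reference */
	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
		elapsed_ns = (int64_t)(now.tv_sec - start.tv_sec) * 1000000000 +
		    (now.tv_nsec - start.tv_nsec);
	} while (elapsed_ns < 1000000000);

	sample2 = rdtsc();		/* 3. second TSC sample */

	/*
	 * 4. TSC ticks elapsed over ~1 second approximate the frequency
	 * in Hz.  As described above, if the guest is descheduled between
	 * steps 2 and 3, the time spent off-cpu is not accounted for
	 * here, skewing the computed frequency.
	 */
	printf("tsc_freq ~= %ju Hz\n", (uintmax_t)(sample2 - sample1));
	return (0);
}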