Date: Mon, 12 Jul 1999 19:38:43 -0700 (PDT)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Mike Smith <mike@smith.net.au>
Cc: Mike Haertel <mike@ducky.net>, Luoqi Chen <luoqi@watermarkgroup.com>, dfr@nlsystems.com, jeremyp@gsmx07.alcatel.com.au, freebsd-current@FreeBSD.ORG
Subject: Re: "objtrm" problem probably found (was Re: Stuck in "objtrm")
Message-ID: <199907130238.TAA73524@apollo.backplane.com>
References: <199907130209.TAA03301@dingo.cdrom.com>
:
:> Although function calls are more expensive than inline code,
:> they aren't necessarily a lot more so, and function calls to
:> non-locked RMW operations are certainly much cheaper than
:> inline locked RMW operations.
:
:This is a fairly key statement in context, and an opinion here would
:count for a lot; are function calls likely to become more or less
:expensive in time?
In terms of cycles, either less or the same. Certainly not more.
If you think about it, a function call is nothing more than a save, a jump,
and a retrieval and return jump later on. On Intel the save is a push; on
other CPUs the save may be to a register (which is pushed later if
necessary). The change in code flow used to be the expensive piece, but
not any more. You typically see either a branch prediction cache (Intel)
offering a best case of 0-cycle latency, or a single-cycle latency
that is slot-fillable (MIPS).
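
To make that concrete, here is a minimal sketch (assuming i386 and the
traditional cdecl calling convention; the function names are made up for
illustration) of what a direct call boils down to -- the comments show
the rough instruction sequence a compiler emits:

    /* Sketch only, assuming i386/cdecl; names are hypothetical. */
    static int
    add_one(int x)          /* callee: roughly
                             *   movl 4(%esp),%eax ; incl %eax ; ret */
    {
            return (x + 1);
    }

    int
    caller(int v)
    {
            /* The call site is roughly:
             *   pushl %eax        ; the "save" (argument push)
             *   call  add_one     ; deterministic branch to a direct label
             *   addl  $4,%esp     ; pop the argument
             * The return-address push/pop and the ret are the only extra
             * work versus straight-line inline code.
             */
            return (add_one(v));
    }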
Since the jump portion of a subroutine call to a direct label is nothing
more than a deterministic branch, the branch prediction cache actually
works in this case. You do not quite get 0-cycle latency due to the
push/pop and any argument setup, but it is very fast.
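
As a rough illustration of the trade-off quoted above, here is a sketch
(assuming i386 and GCC inline asm; the function and variable names are
hypothetical) of the two cases being compared:

    /* Sketch only, assuming i386 + GCC inline asm; names are hypothetical. */

    /* Inline locked RMW: per the point quoted above, the "lock" prefix
     * (a locked bus/cache-line cycle) is the dominant cost, not the
     * surrounding instructions.
     */
    static __inline void
    locked_incr(volatile int *p)
    {
            __asm __volatile("lock; incl %0" : "+m" (*p));
    }

    /* Out-of-line, non-locked RMW: a plain increment behind a call.
     * The only overhead is the push/call/ret sequence discussed above.
     */
    void
    plain_incr(volatile int *p)
    {
            (*p)++;
    }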
-Matt
Matthew Dillon
<dillon@backplane.com>
