Date: Wed, 25 Jan 2006 15:09:20 +0200
From: Ian FREISLICH <if@hetzner.co.za>
To: "Poul-Henning Kamp" <phk@phk.freebsd.dk>
Cc: Alexander Leidinger <Alexander@Leidinger.net>, current@freebsd.org, arch@freebsd.org
Subject: Re: [TEST/REVIEW] CPU accounting patches
Message-ID: <E1F1kOm-000FY2-8Z@hetzner.co.za>
In-Reply-To: Message from "Poul-Henning Kamp" <phk@phk.freebsd.dk> of "Wed, 25 Jan 2006 11:58:07 +0100." <29245.1138186687@critter.freebsd.dk>
"Poul-Henning Kamp" wrote: > In message <20060125114544.edawx42obkkos0ck@netchild.homeip.net>, Alexander L ei > dinger writes: > > > >> That way, the user/system time reported will get units of "cpu seconds > >> if the cpu ran full speed". > > > >How large do you expect the error will be? > > I don't consider it an error, I consider it increasing precision. > > > If you run > > time mycommand > > on your laptop, and along the way the CPU clock ramps up from > 75 MHz to 600 MHz before it reports > > user 2.01 sys 0.30 real 4.00 > > What exactly have you learned from the first two numbers with the > current definition of "cpu second" ? "One second's worth of the computer's processing time, which is based on actual machine cycles used, not calendar time." ? Is the getrusage() manual page out of date? It claims that user and system time is is "the total amount of time spent executing in user mode" and "the total amount of time spent in the system executing on behalf of the process(es)". > With my definition you would be more likely to see lower numbers > maybe > user 0.20 sys 0.03 real 4.00 > > And they would have meaning, they should be pretty much the same > no matter what speed your CPU runs at any instant in time. For how much of those 4 real seconds was the computer doing something else using your definition? It's certainly not 3.77. It's probably closer to 1.69. > In theory, it should be possible to compare user/sys numbers > you collect while running at 75 MHz with the ones you got > under full steam at 1600 MHz. If my CPU clock runs slower for a period of time, processes remain on the CPU for longer. I don't really see how 0.23 [wallclock] seconds _if_ the cpu ran [at] full speed is different to 2.31 wallclock seconds in this context. One is scaled to maximum CPU clock frequency and the other is scaled to wallclock time. I find the wallclock scale a bit less confusing because I normally exist in that scale[1]: on my two hypothetical identical servers, one clocked down to 50% for some reason, the same job takes twice the wallclock time but identical CPU time? Ian -- Ian Freislich 1. It would be nice to say to my boss that this project would have taken a week if I'd worked faster and get a fat bonus because I could have done it faster.