Date: Sat, 8 Apr 95 22:02 WET DST
From: pete@pelican.com (Pete Carah)
To: current@FreeBSD.org
Subject: Re: Disk performance
Message-ID: <m0rxp8a-000K0jC@pelican.com>
In-Reply-To: <199504081952.MAA15923@gndrsh.aac.dev.com>
In article <199504081952.MAA15923@gndrsh.aac.dev.com> rgrimes writes (and various others, too):

>> > > > Why would taking out the L2 cache slow down data transfer to and
>> > > > from the primary cache?

>> > > because checking the L2 takes time, and they don't start the mem-cycle
>> > > until they know they missed.

The SGI Challenge starts both on the same clock, then throws away the
memory data if some cache hit first.  I don't know whether they abort the
CAS part of the memory cycle in that case, but I think not; that is part
of the distributed-cache-coherency scheme too.  Doing this hurts you badly
in the absence of memory interleave, though, since you can't abort a
RAS cycle; in that case you are better off waiting, as they do.

>> > You would be right if he was talking about why turning off the L2 cache
>> > increases memory speed.  But that is not what he said ``taking out L2
>> > cache slowing down L1 cache''.  Nothing, nada, zippo, should affect
>> > L1 cache speeds other than code changes, and internal clock frequency.

Possibly worse hit rates; isn't the L2 controller on the main chip too?
Also, the L2 may be wider than main memory and do a background fetch of
the other half; I don't know about that motherboard.  Bigger ($$$$$)
systems do this.  Then L1 fills will come from the L2 about half again
as often as from main memory directly.

-- Pete
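The serial-versus-speculative lookup tradeoff described above can be put
in rough numbers.  Below is a back-of-envelope latency model; all hit
rates and cycle counts are made-up illustrative values, not measurements
of the SGI Challenge or any other machine:

```python
# Hypothetical numbers for illustration only.
L1_HIT = 0.95            # assumed L1 hit rate
L2_HIT = 0.80            # assumed L2 hit rate (on L1 misses)
T_L1, T_L2, T_MEM = 1, 4, 20   # access latencies in cycles (assumed)

def serial_lookup():
    """L2 is probed first; the memory cycle starts only after
    the L2 is known to have missed, so latencies add."""
    miss_path = L2_HIT * T_L2 + (1 - L2_HIT) * (T_L2 + T_MEM)
    return L1_HIT * T_L1 + (1 - L1_HIT) * miss_path

def parallel_lookup():
    """SGI-Challenge style: L2 probe and memory cycle start on the
    same clock; memory data is thrown away on an L2 hit, so an
    L2 miss costs only the memory latency."""
    miss_path = L2_HIT * T_L2 + (1 - L2_HIT) * T_MEM
    return L1_HIT * T_L1 + (1 - L1_HIT) * miss_path

print(serial_lookup(), parallel_lookup())
```

With these numbers the speculative start wins on average (1.31 vs. 1.35
cycles per reference), but note it burns a memory cycle on every L2 hit,
which is exactly why it hurts without interleave: an un-abortable RAS
cycle ties up the bank even when the data is discarded.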