Date:        Tue, 16 Aug 2005 12:06:32 -0400
From:        Scott Long <scottl@samsco.org>
To:          Martin Cracauer <cracauer@cons.org>
Cc:          freebsd-amd64@freebsd.org
Subject:     Re: 4 GB RAM showing up as 3, BIOS memory hole and all that
Message-ID:  <43020F08.6070108@samsco.org>
In-Reply-To: <20050816111428.A24284@cons.org>
References:  <20050815125657.A92343@cons.org> <20050815172250.GA32804@troutmask.apl.washington.edu> <20050815183846.A99145@cons.org> <430173BE.4080802@samsco.org> <20050816111428.A24284@cons.org>
Martin Cracauer wrote:

>> You are definitely going to lose the address space from 3.75GB to 4GB to normal PCI and APIC overhead. If you have PCI Express then you'll likely also lose the space from 3.5 to 3.75GB. Any other space lost beyond that would be possible but unusual. And no, most AMD systems do *NOT* remap the lost space.
>>
>> If you want a detailed analysis then please send a verbose boot message along with a pointer to the specs on your motherboard. That will give enough information to say if FreeBSD is at fault or if your motherboard is simply sub-standard.
>
> If you have time for explanations I would rather be interested to know how the above situation works in the case that you *don't* have 4 GB of RAM.
>
> Correct me if I'm wrong but:
> - the above 3.5 to 4.0 GB addresses are physical, not virtual
> - if you have less than 3.5 GB there is no physical memory at these addresses that PCI and APIC need
> - but if you have 4 GB there is physical memory at these addresses
>
> I don't understand how this can transparently work with and without physical RAM at 3.5-4.0 GB.
>
> I am not worrying too much about my particular board, and in any case I just fatfingered a BIOS update and it won't POST :-) It's an Arima Rioworks HDAMB, http://www.rioworks.com/HDAMB.htm. If people are interested I can send the boot -v after I recover.
>
> Thanks
> Martin

First thing to keep in mind is that I'm talking about physical addresses, not OS-specific virtual addresses. Second thing to keep in mind is that I'm talking about *address space*, not *RAM*. There is a big difference here. An address represents a location where data can be stored or retrieved. That location does _not_ have to be RAM. It could be a register on an APIC chip, a memory array on a PCI card, or a location in a local RAM chip. PCI (and AGP is really just like PCI from this perspective) specifically allows the CPU to access registers and memory arrays on the cards as if they were local addresses; that's the point of the memory-mapped I/O (MEMIO) ranges and Base Address Registers. When the CPU does a load or store of an address that falls into these ranges, the request doesn't go to RAM; it goes to the PCI bus and is serviced by the appropriate card there. Local RAM doesn't get involved at all. That's why it works transparently whether or not there is RAM installed at 3.5-4.0GB: loads and stores to those addresses are claimed by the devices either way, and what becomes of any RAM that would have sat there is up to the memory controller, as described below.

PCI doesn't actually care much which addresses are used, but by convention the PC platform puts them at the top of the 32-bit address space. But what happens when you have so much RAM that the RAM could service those very high addresses? For many years that wasn't an issue because it wasn't possible or practical to put that much RAM into a PC. Now it is, so it's up to the memory controller and host bridge to figure out what to do. Many systems simply ignore that high RAM, resulting in the loss of effective RAM (as you saw in your case). More capable systems will take the RAM that would occupy the 3.5-4.0GB address space and remap it into the 4.0-4.5GB address space. The RAM doesn't care, because it's just an array of storage cells; it's up to the memory controller to associate addresses with those cells. Of course, that only works if you're using a 64-bit (or 32-bit PAE-enabled) OS that can deal with physical addresses larger than 32 bits. Intel Xeon systems typically do the remapping trick, so when you boot FreeBSD i386+PAE or amd64 on them, they might show 4.5GB of RAM when there really is only 4GB (this is a limitation of how we compute RAM and is purely cosmetic, but it should be fixed).
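To make that concrete, here is a small illustrative C program. It is not FreeBSD code; the map entries and sizes below are made up, merely in the spirit of the BIOS e820/SMAP table that a verbose boot prints. It models a hypothetical 4GB machine whose chipset remaps the half gigabyte behind the 3.5-4.0GB device window up above 4GB, and it sums the usable ranges:

/*
 * Sketch only: a made-up physical memory map for a 4GB machine whose
 * chipset remaps the RAM that would collide with the 3.5-4.0GB device
 * window to just above 4GB.  None of these numbers come from a real board.
 */
#include <stdio.h>
#include <stdint.h>

struct map_entry {
	uint64_t base;		/* physical base address */
	uint64_t length;	/* length in bytes */
	int	 usable;	/* 1 = RAM the OS may use, 0 = reserved (MMIO, APIC, ...) */
};

int
main(void)
{
	static const struct map_entry map[] = {
		{ 0x000000000ULL, 0xE0000000ULL, 1 },	/* 3.5GB of RAM below the hole   */
		{ 0x0E0000000ULL, 0x20000000ULL, 0 },	/* PCI BARs, APIC, firmware, ... */
		{ 0x100000000ULL, 0x20000000ULL, 1 },	/* 0.5GB of RAM remapped >4GB    */
	};
	uint64_t ram = 0, ram_above_4g = 0;

	for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
		if (!map[i].usable)
			continue;
		ram += map[i].length;
		if (map[i].base >= 0x100000000ULL)
			ram_above_4g += map[i].length;
	}
	printf("usable RAM: %llu MB (%llu MB of it above 4GB)\n",
	    (unsigned long long)(ram >> 20),
	    (unsigned long long)(ram_above_4g >> 20));
	/*
	 * A 32-bit, non-PAE kernel can only generate 32-bit physical
	 * addresses, so the entry above 4GB is unreachable and that RAM
	 * is effectively lost to it.
	 */
	return (0);
}

On the made-up map above this prints 4096 MB usable, 512 MB of it above 4GB, which is exactly the situation where a 64-bit or PAE kernel sees all of your RAM and a plain 32-bit kernel does not.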
Many AMD Opteron systems do not do the remapping, and the result is that you lose effective RAM. The difference between Intel and AMD here is that AMD puts the memory controller into the CPU instead of into the PCI host bridge, so it's much harder to have the two work together to do the remapping. I believe that there are some Opteron systems that can do this, though.

A junior doc writer task would be for someone to collect all of the email responses that I give on this topic (I seem to get at least one query a month) and turn it into an FAQ for the FreeBSD doc set.

Scott
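P.S. The remap-vs-no-remap difference boils down to simple arithmetic. This back-of-the-envelope sketch (not FreeBSD code; the 4GB of DIMMs and the 512MB window are hypothetical sizes) just spells it out:

/*
 * Sketch only: what the OS can use for a given amount of installed RAM
 * and a given device window, with and without chipset remapping.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t
visible_ram(uint64_t installed, uint64_t hole, int chipset_remaps)
{
	/* RAM shadowed by the device window is either moved above 4GB or lost. */
	return (chipset_remaps ? installed : installed - hole);
}

int
main(void)
{
	uint64_t installed = 4ULL << 30;	/* 4GB of DIMMs            */
	uint64_t hole      = 512ULL << 20;	/* 3.5-4.0GB device window */

	printf("no remap (typical Opteron board): %llu MB usable\n",
	    (unsigned long long)(visible_ram(installed, hole, 0) >> 20));
	printf("remap (typical Xeon board): %llu MB usable, %llu MB of it above 4GB\n",
	    (unsigned long long)(visible_ram(installed, hole, 1) >> 20),
	    (unsigned long long)(hole >> 20));
	return (0);
}

With those sizes the no-remap case comes out to 3584 MB, which is roughly the "4 GB showing up as 3.5" symptom that started this thread.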