Date: Wed, 09 Oct 2002 16:57:37 -0700
From: Wes Peters <wes@softweyr.com>
To: Matthew Dillon <dillon@apollo.backplane.com>
Cc: "Vladimir B. Grebenschikov" <vova@sw.ru>, Nate Lawson <nate@root.org>, arch@FreeBSD.ORG
Subject: Re: Database indexes and ram (was Re: using mem above 4Gb was: swapon some regular file)
Message-ID: <3DA4C271.37AACAA3@softweyr.com>
References: <Pine.BSF.4.21.0210081209010.11243-100000@root.org> <1034105993.913.1.camel@vbook.express.ru> <200210082015.g98KFFrq084625@apollo.backplane.com> <1034109053.913.7.camel@vbook.express.ru> <200210082051.g98KpjU1084793@apollo.backplane.com>
Matthew Dillon wrote:
>
> :Matthew, please look at my initial posting.  My idea is to extend the RAM
> :available for storing such things as indexes above the 4GB (actually about
> :3GB) limit, if there is more physical RAM.  The current mmap (read: VM)
> :implementation will map/cache only memory below 4GB, regardless of the
> :amount of physical RAM.
>
> Well, this has been discussed before.  The issue with accessing RAM
> over 4GB, apart from the fact that the page tables double in size (you
> have to use 64-bit PTEs instead of 32-bit PTEs), is that DMAing to/from
> memory above 4GB can be rather tricky.  This creates all sorts of
> problems, including not necessarily being able to read() or write()
> above the 4GB mark (in regards to physical RAM) without a lot of mess
> in the OS .. bounce buffers redux, so to speak.

Linux solved this problem by refusing to do it.  The candidates for DMA
transfers include skbufs and buffers from the disk buffer pool, both of
which are allocated from the lowest 4GB of physical RAM when using PAE
mode.

> So while it would be possible to use such memory as unswappable,
> unIOable, anonymous-only memory, such use would be fairly limited and
> might not be worth implementing for a 32-bit platform.  At that point
> you might as well move to a 64-bit platform.

Nah, it works great.  Each process gets 3GB of process virtual address
space and 1GB of kernel virtual address space, and all of the program
text+data can be located anywhere in physical RAM.  For things like
databases that need large indexes in memory, this is a big win.

> It also might be more effective to spend that money on more RAM for
> the RAID system backing the database rather than trying to bump the
> PC past the 4GB mark, or spend that money on purchasing a second
> server and distributing the load across the two servers.

Neither will help you with index sizes if you're using really honking
big tables, where the index just won't fit.  We actually use multiple
processes to hold cached data, including indexes, in order to make use
of the extra RAM.  I should shut up now.  ;^)

> The types of accesses to the index that might result in cacheable
> table data are also the types of accesses to the index that will
> likely result in cacheable index data.  Using the same argument, the
> types of accesses that might result in an uncacheable index would also
> likely result in uncacheable table data, which means you are going to
> run up against seek/read problems on the table data, making it more
> worthwhile to spend the money on beefing up the storage subsystem.

That's only true if your database server is I/O bound.  Depending on
your job mix, this may or may not be the problem.

-- 
"Where am I, and what am I doing in this handbasket?"

Wes Peters                                                 Softweyr LLC
wes@softweyr.com                                   http://softweyr.com/
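
For a rough sense of how much the page tables Matthew refers to actually
grow, a back-of-the-envelope figure (assuming 4KB pages and a fully
populated 4GB address space; real processes map far less, and the extra
directory level PAE adds is ignored here):

    4GB / 4KB pages          = 1,048,576 PTEs
    1,048,576 * 4-byte PTEs  = 4MB of page tables  (classic 32-bit paging)
    1,048,576 * 8-byte PTEs  = 8MB of page tables  (PAE, 64-bit PTEs)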
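
Wes doesn't say how their multi-process cache is put together, so the
following is only a rough sketch of the general idea under assumed
details: fork one cache process per slice of the index file, let each
child mmap() its own slice into its private address space, and answer
lookups over pipes.  The one-field request/reply protocol, the slicing
scheme, and every name here are invented for illustration; a real
server would use sockets or shared memory, route lookups by key, and
check for errors.

/*
 * Rough sketch: one cache process per slice of a large index file.
 * Each child mmap()s only its own slice, so N children together can
 * keep far more of the index resident than a single 3GB address
 * space allows.  Protocol, names, and sizes are made up for
 * illustration; error handling is mostly omitted.
 */
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NSLICES 4                      /* number of cache processes */

struct request { off_t offset; };      /* offset within the child's slice */
struct reply   { char data[64]; };     /* bytes found at that offset */

static void
cache_child(const char *path, off_t start, size_t len, int rfd, int wfd)
{
    int fd = open(path, O_RDONLY);
    char *slice;

    if (fd < 0)
        exit(1);
    slice = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, start);
    if (slice == MAP_FAILED)
        exit(1);

    /* Serve lookups until the parent closes its end of the pipe.
     * No bounds checking on the offset in this sketch. */
    struct request req;
    while (read(rfd, &req, sizeof(req)) == sizeof(req)) {
        struct reply rep;
        memcpy(rep.data, slice + req.offset, sizeof(rep.data));
        write(wfd, &rep, sizeof(rep));
    }
    exit(0);
}

int
main(int argc, char **argv)
{
    struct stat st;
    long pagesz = sysconf(_SC_PAGESIZE);
    int to_child[NSLICES][2], from_child[NSLICES][2];
    int i;

    if (argc != 2 || stat(argv[1], &st) < 0) {
        fprintf(stderr, "usage: %s indexfile\n", argv[0]);
        return 1;
    }

    /* mmap() offsets must be page aligned, so round the slice size
     * down; the tail of the file is simply ignored in this sketch. */
    off_t slice_len = (st.st_size / NSLICES / pagesz) * pagesz;

    for (i = 0; i < NSLICES; i++) {
        pipe(to_child[i]);
        pipe(from_child[i]);
        if (fork() == 0)
            cache_child(argv[1], (off_t)i * slice_len,
                (size_t)slice_len, to_child[i][0], from_child[i][1]);
    }

    /* Parent: route a lookup to whichever child owns the offset. */
    struct request req = { 0 };        /* offset 0 lives in slice 0 */
    struct reply rep;
    write(to_child[0][1], &req, sizeof(req));
    read(from_child[0][0], &rep, sizeof(rep));
    printf("first bytes of the index: %.8s\n", rep.data);
    return 0;
}

A single lookup at offset 0 stands in for the routing step here; in a
real setup the parent would hash the key to pick the owning slice, and
the hot part of each slice stays resident in its own process, which is
the point of the exercise.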