From owner-freebsd-current Thu May 25 20:06:36 1995
Return-Path: current-owner
Received: (from majordom@localhost) by freefall.cdrom.com (8.6.10/8.6.6) id UAA03179 for current-outgoing; Thu, 25 May 1995 20:06:36 -0700
Received: from Root.COM (implode.Root.COM [198.145.90.1]) by freefall.cdrom.com (8.6.10/8.6.6) with ESMTP id UAA03168 for ; Thu, 25 May 1995 20:06:30 -0700
Received: from corbin.Root.COM (corbin.Root.COM [198.145.90.18]) by Root.COM (8.6.8/8.6.5) with ESMTP id UAA29446; Thu, 25 May 1995 20:09:31 -0700
Received: from localhost (localhost [127.0.0.1]) by corbin.Root.COM (8.6.11/8.6.5) with SMTP id UAA00135; Thu, 25 May 1995 20:06:36 -0700
Message-Id: <199505260306.UAA00135@corbin.Root.COM>
To: Bruce Evans
cc: current@FreeBSD.org
Subject: Re: newfs weirdness...
In-reply-to: Your message of "Fri, 26 May 95 12:08:16 +1000." <199505260208.MAA16119@godzilla.zeta.org.au>
From: David Greenman
Reply-To: davidg@Root.COM
Date: Thu, 25 May 1995 20:06:35 -0700
Sender: current-owner@FreeBSD.org
Precedence: bulk

>>>FreeBSD only supports 63 bit file system offsets. Files larger than 2GB
>>>and mmapping at offsets larger than 2GB are currently broken. mmapping
>>>of objects larger than 4G cannot work with the current interfaces.
>
>>   Ummm, the filesystem layer supports 40 bit file system offsets (the
>>amount of a blkno that can be stored in 31 bits). The VM system, however,
>>is currently limited to 31 bit file offsets, which is why it can't deal
>>with files larger than 2GB. We're going to fix this limit in the VM system,
>>however, for FreeBSD 2.2 to be 43 bits by changing the 'offset' to be a
>>page offset rather than a byte offset. Once we do this, the limit will be
>>imposed by bugs in the FS that store blkno in an int (the limit will then
>>be 40 bits == 1TB). If we fix the FS bugs, then the limit will again be
>>imposed by the VM system at 43 bits (8TB). I think limits in the terabyte
>>range will be adequate for the medium term.
>
>The `size_t len' arg limits the size of objects that can be mapped. This
>limit is fundamental - it would be hard to map objects larger than the
>address space (no segments please). This limit will expand automatically
>when integer sizes and/or address spaces expand, but so will some of the
>other limits.

   The size of an object will either have to be stored in a long long, or it
will have to be stored in terms of pages rather than bytes. Since the object
size is always a multiple of the page size, it makes sense for it to
represent a number of pages. ...of course you'll never be able to map more
than just a portion of a > 3.75GB file, but at least you'll be able to get
to the whole thing by mapping pieces at a time.

-DG