Date:      Tue, 7 Jul 2009 10:10:28 -0700 (PDT)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        freebsd-arch@FreeBSD.org
Subject:   Re: DFLTPHYS vs MAXPHYS
Message-ID:  <200907071710.n67HASb7088248@apollo.backplane.com>
References:  <20090707151901.GA63927@les.ath.cx> <200907071639.n67GdBD2087690@apollo.backplane.com>

    A more insidious problem here, one that I think is being missed, is
    that newer filesystems are starting to use larger filesystem block
    sizes.  I hit serious issues myself a few years ago when I tried to
    create a UFS filesystem with a 64K basic filesystem block size, and
    I hit similar issues with HAMMER, which uses 64K buffers for bulk
    data.  I had to fix those by reincorporating code into ATA that had
    originally existed to break up large single-transfer requests
    exceeding the chipset's DMA capability.  In the case of ATA,
    numerous older chips can't even do 64K due to bugs in the DMA
    hardware; their maximum is actually 65024 bytes.
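
    As a rough sketch only (not the actual ATA driver code), the shape
    of that break-up logic looks something like the following.  The
    function and helper names here are hypothetical; the only hard
    number taken from the hardware above is the 65024-byte limit.

    #include <stddef.h>
    #include <stdint.h>

    #define CHIP_DMA_MAX    65024u  /* buggy chips top out just below 64K */
    #define SECTOR_SIZE     512u

    /* hypothetical single-transfer primitive provided by the driver */
    int chip_dma_xfer(uint64_t lba, void *buf, size_t bytes);

    /*
     * Break one large, sector-aligned request into pieces no larger
     * than the chipset's DMA limit.  65024 is itself a multiple of
     * 512, so every piece stays sector-aligned.
     */
    static int
    split_dma_request(uint64_t lba, void *buf, size_t bytes)
    {
        uint8_t *p = buf;

        while (bytes > 0) {
            size_t chunk = (bytes > CHIP_DMA_MAX) ? CHIP_DMA_MAX : bytes;
            int error = chip_dma_xfer(lba, p, chunk);

            if (error)
                return (error);
            lba   += chunk / SECTOR_SIZE;
            p     += chunk;
            bytes -= chunk;
        }
        return (0);
    }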

    Traditionally the cluster code enforced such limits but assumed that
    the basic filesystem block size would be small enough not to hit the
    limits.  It becomes a real problem when the filesystem itself wants to
    use a large basic block size. 
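
    For illustration only (these are not the real kernel interfaces),
    the old assumption amounts to the cluster layer clamping each
    transfer to the device limit in whole filesystem blocks.  Once a
    single block exceeds the device limit, the clamp has nothing left
    to fall back on:

    #include <stddef.h>

    /*
     * Illustrative clamp, not the real cluster code.  Returns the
     * largest transfer that is both a whole number of filesystem
     * blocks and within the device's per-request limit; returns 0
     * when even one block won't fit, i.e. when the request must be
     * split below the clustering layer instead.
     */
    static size_t
    cluster_xfer_size(size_t fs_block_size, size_t dev_max_io)
    {
        size_t nblocks = dev_max_io / fs_block_size;

        if (nblocks == 0)
            return (0);     /* 64K block on a <64K device: assumption broken */
        return (nblocks * fs_block_size);
    }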

    In that respect, hardware that is limited to 64K has serious
    consequences that cascade up through the VFS layers.

						-Matt



