Date:      Thu, 09 Jul 2009 23:53:55 +0200
From:      Ivan Voras <ivoras@freebsd.org>
To:        freebsd-arch@freebsd.org
Subject:   Re: DFLTPHYS vs MAXPHYS
Message-ID:  <h35oto$vkd$1@ger.gmane.org>
In-Reply-To: <d763ac660907051158i256c0f93n4a895a992c2a8c34@mail.gmail.com>
References:  <4A4FAA2D.3020409@FreeBSD.org>	<20090705100044.4053e2f9@ernst.jennejohn.org>	<4A50667F.7080608@FreeBSD.org> <20090705223126.I42918@delplex.bde.org>	<4A50BA9A.9080005@FreeBSD.org> <20090706005851.L1439@besplex.bde.org>	<4A50DEE8.6080406@FreeBSD.org> <20090706034250.C2240@besplex.bde.org>	<4A50F619.4020101@FreeBSD.org> <d763ac660907051158i256c0f93n4a895a992c2a8c34@mail.gmail.com>

Adrian Chadd wrote:
> 2009/7/6 Alexander Motin <mav@freebsd.org>:
>
>> In this tests you've got almost only negative side of effect, as you have
>> said, due to cache misses. Do you really have CPU with so small L2 cache?
>> Some kind of P3 or old Celeron? But with 64K MAXPHYS you just didn't get any
>> benefit from using bigger block size.
>
> All the world isn't your current desktop box with only SATA devices :)
>
> There have been and will be plenty of little embedded CPUs with tiny
> amounts of cache for quite some time to come.

Yes, and no embedded developer will use the GENERIC kernel on his device
so we can, for this purpose, ignore them :)

> You're also doing simple stream IO tests. Please re-think the thought
> experiment with a whole lot of parallel IO going on rather than just
> straight single stream IO.

Also, one thing to remember is RAID, both hardware and software. For
example, with a gstripe of two drives it's very visible how sharply
performance falls when going from 32 kB stripes to 64 kB stripes, since
the upper layer passes 64 kB requests to GEOM. GEOM passes the request
to gstripe, which in the first case requests 32 kB from each drive
(faster) and in the second case 64 kB from only one of the drives
(no performance gain from striping).

(please adjust for 32/64 -> 64/128 if appropriate, I don't have the raw
numbers now)
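The stripe arithmetic above can be sketched like this (a minimal
illustration only, assuming simple round-robin striping from offset 0;
the names are hypothetical, this is not gstripe's actual code):

```python
KB = 1024

def drives_touched(offset, length, stripe_size, ndrives):
    """Return the set of drive indices a [offset, offset+length)
    request hits under round-robin striping."""
    first_stripe = offset // stripe_size
    last_stripe = (offset + length - 1) // stripe_size
    return {s % ndrives for s in range(first_stripe, last_stripe + 1)}

# A 64 kB request on a 2-drive stripe:
print(drives_touched(0, 64 * KB, 32 * KB, 2))  # both drives work in parallel
print(drives_touched(0, 64 * KB, 64 * KB, 2))  # only one drive serves it
```

With 32 kB stripes the request spans two stripes and both drives read in
parallel; with 64 kB stripes it fits in a single stripe, so one drive
does all the work.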

Of course this isn't a reason by itself, but both Windows and Linux have
1 MB BIO buffers, so it's reasonable to assume that drive vendors will
optimize for that size if they can.


