Date: Fri, 18 Mar 2016 22:55:44 +0000
From: bugzilla-noreply@freebsd.org
To: freebsd-bugs@FreeBSD.org
Subject: [Bug 208130] smbfs is slow because it (apparently) doesn't do any caching/buffering
Message-ID: <bug-208130-8@https.bugs.freebsd.org/bugzilla/>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208130

Bug ID: 208130
Summary: smbfs is slow because it (apparently) doesn't do any caching/buffering
Product: Base System
Version: 10.2-RELEASE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: kern
Assignee: freebsd-bugs@FreeBSD.org
Reporter: noah.bergbauer@tum.de
CC: freebsd-amd64@FreeBSD.org

I set up an smbfs mount on FreeBSD 10.2-RELEASE today and noticed that it's very slow. How slow? Some numbers: reading a 600 MB file from the share with dd reports around 1 MB/s, while doing the same in a Linux VM running inside bhyve on this very same machine yields a whopping 100 MB/s. I conclude that the SMB server is irrelevant in this case.

There's a recent discussion about this on freebsd-hackers (https://lists.freebsd.org/pipermail/freebsd-hackers/2015-November/048597.html) which reveals an interesting detail: the situation can be improved massively, up to around 60 MB/s, on the FreeBSD side just by using a larger dd buffer size (e.g. 1 MB). Interestingly, using very small buffers has only a negligible impact on Linux (until the whole affair gets CPU-bottlenecked, of course).

I know little about SMB, but a quick network traffic analysis gives some insights: FreeBSD's smbfs seems to translate every read() call from dd directly into an SMB request. So with a small buffer size of e.g. 1k, something like this seems to happen:

* client requests 1k of data
* client waits for a response (network round-trip)
* client receives response
* client hands data to dd, which then issues another read()
* client requests 1k of data
* ...

Note how we're spending most of our time waiting for network round-trips. Because a bigger buffer means larger SMB requests, this obviously leads to higher network saturation and less wasted time. I'm unable to spot a similar pattern on Linux.
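The round-trip cost can be sketched with some back-of-the-envelope arithmetic. The ~1 ms round-trip time below is an assumption for illustration, not a measured value; the resulting figure happens to line up with the ~1 MB/s observed:

```shell
#!/bin/sh
# If every 1 KiB read() becomes one synchronous SMB request, a 600 MB
# file costs one network round-trip per kilobyte.
FILE_MB=600
BS_KB=1          # dd buffer size in KiB
RTT_MS=1         # assumed round-trip time in milliseconds

ROUND_TRIPS=$((FILE_MB * 1024 / BS_KB))
TOTAL_S=$((ROUND_TRIPS * RTT_MS / 1000))

echo "${ROUND_TRIPS} round-trips => ~${TOTAL_S} s to read ${FILE_MB} MB"
# That is roughly 1 MB/s from round-trip latency alone, before any
# actual transfer time -- consistent with the numbers above.
```

With a 1 MB buffer the same file needs only 600 round-trips, so latency stops dominating, which is why a larger dd buffer size helps so much.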
On Linux, a steady flow of data is maintained even with small buffer sizes, so apparently some caching/buffering must be happening. Linux's cifs has a "cache" option and indeed, disabling it produces exactly the same performance (and network) behavior I'm seeing on FreeBSD.

So to sum things up: the fact that smbfs doesn't have anything like Linux's cache causes a 100-fold performance hit. Obviously, that's a problem.

-- 
You are receiving this mail because:
You are the assignee for the bug.
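For reference, the Linux-side comparison above can be reproduced with the cifs cache mount option; server, share, and mount point below are placeholders:

```shell
# Default caching (cache=strict): fast even with small read() sizes.
mount -t cifs //server/share /mnt/smb -o username=guest

# Caching disabled: every read() hits the wire, matching the
# behavior observed with FreeBSD's smbfs.
mount -t cifs //server/share /mnt/smb -o username=guest,cache=none
```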