Date: Mon, 16 Jan 2017 16:19:59 +0530
From: Aijaz Baig <aijazbaig1@gmail.com>
To: Jan Bramkamp <crest@rlwinm.de>
Cc: freebsd-scsi@freebsd.org
Subject: Re: Understanding the rationale behind dropping of "block devices"
Message-ID: <CAHB2L+fxr0+PquAXtznMFkhr+VGEX+dY03EkyWM4Ef6qvMLoXQ@mail.gmail.com>
In-Reply-To: <CAHB2L+d1XG096SumiAk3VS7AE4cFLPfSCnCEjcWNXAeOxp2QCg@mail.gmail.com>
References: <CAHB2L+dRbX=E9NxGLd_eHsEeD0ZVYDYAx2k9h17BR0Lc=xu5HA@mail.gmail.com>
 <20170116071105.GB4560@eureka.lemis.com>
 <CAHB2L+d9=rBBo48qR+PXgy+JDa=VRk5cM+9hAKDCPW+rqFgZAQ@mail.gmail.com>
 <a86ad6f5-954d-62f0-fdb3-9480a13dc1c3@freebsd.org>
 <29469.1484559072@critter.freebsd.dk>
 <3a76c14b-d3a1-755b-e894-2869cd42aeb6@rlwinm.de>
 <CAHB2L+d1XG096SumiAk3VS7AE4cFLPfSCnCEjcWNXAeOxp2QCg@mail.gmail.com>
I must add that I am getting confused between two different things here:

From the replies above it appears that all disk accesses now have to go
through the VM subsystem (so no raw disk accesses), yet the architecture
handbook says raw interfaces are the way to go for disks
(https://www.freebsd.org/doc/en/books/arch-handbook/driverbasics-block.html)?

Secondly, I presume that the VM subsystem has its own caching and buffering
mechanism that is independent of the file system, so an IO can choose to
skip buffering at the file-system layer but will still be served by the VM
cache, irrespective of whatever the VM object maps to. Is that true? I
believe this is what is meant by 'caching' at the VM layer.

Any comments?

On Mon, Jan 16, 2017 at 4:09 PM, Aijaz Baig <aijazbaig1@gmail.com> wrote:

> Oh thank you everyone for clearing the air a bit. Although for a noob like
> myself, that was mighty concise!
>
> Nevertheless, let me re-iterate what has been summarized in the last two
> mails so I know I got exactly what was being said.
>
> Let me begin by saying that I come from the Linux world, where there have
> traditionally been two separate caches, the "buffer cache" and the "page
> cache", although almost all IO is now driven through the "page cache". The
> buffer cache still remains, but it now only caches disk blocks
> (https://www.quora.com/What-is-the-difference-between-Buffers-and-Cached-columns-in-proc-meminfo-output).
> So 'read' and 'write' were satisfied through the buffer cache, whereas
> 'fread'/'fwrite' and 'mmap' went through the page cache (which was
> actually populated by reading the buffer cache, thereby wasting almost
> twice the memory and compute cycles). Hence the merging.
>
> Nevertheless, as had been mentioned by Julian, it appears that there is no
> "buffer cache" so to speak (is that correct, Julian?):
>
> If you want device M, at offset N we will fetch it for you from the
> device, DMA'd directly into your address space, but there is no cached
> copy.
>
> Instead it appears FreeBSD has a generic 'VM object' that is used to
> address myriad entities including disks, and as such all operations have
> to go through the VM subsystem now. Does that also mean that there is no
> way an application can directly use raw disks? At least it appears so:
>
> The added complexity of carrying around two alternate interfaces to the
> same devices was judged by those who did the work to be not worth the
> small gain available to the very few people who used raw devices.
>
> Thank you for all your inputs and waiting to hear more! Although a bit
> more context would really help noobs (both to enterprise storage and
> FreeBSD) like me!
>
> On Mon, Jan 16, 2017 at 3:56 PM, Jan Bramkamp <crest@rlwinm.de> wrote:
>
>> On 16/01/2017 10:31, Poul-Henning Kamp wrote:
>>
>>> --------
>>> In message <a86ad6f5-954d-62f0-fdb3-9480a13dc1c3@freebsd.org>, Julian
>>> Elischer writes:
>>>
>>>> Having said that, it would be trivial to add a 'caching' geom layer to
>>>> the system but that has never been needed.
>>>
>>> A tinker-toy-cache like that would be architecturally disgusting.
>>>
>>> The right solution would be to enable mmap(2)'ing of disk(-like)
>>> devices, leveraging the VM system's existing code for caching and
>>> optimistic prefetch/clustering, including the very primitive
>>> cache-control/visibility offered by madvise(2), mincore(2), mprotect(2),
>>> msync(2) etc.
>>>
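To make the mmap(2) suggestion above a bit more concrete (at least for
myself), here is a rough, untested sketch of what such userland code might
look like if mapping a disk-like device were actually allowed; as I
understand the thread, it is not today. The device name /dev/ada0 and the
1 MiB window are made-up examples, and only standard calls (open, mmap,
madvise, msync, munmap) are used:

#include <err.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int
main(void)
{
	/* Hypothetical example: as discussed above, mmap() of disk devices
	 * is not currently enabled, so this only illustrates what the
	 * proposal might look like from userland. */
	size_t len = 1024 * 1024;
	int fd = open("/dev/ada0", O_RDONLY);	/* made-up device name */
	if (fd == -1)
		err(1, "open");

	void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		err(1, "mmap");

	/* Ask the VM system to prefetch/cluster for a sequential scan. */
	if (madvise(p, len, MADV_SEQUENTIAL) == -1)
		warn("madvise");

	/* Touching a page would fault it in from the device; the cached
	 * copy would live in the VM object backing the mapping, not in a
	 * separate buffer cache. */
	volatile unsigned char first = *(volatile unsigned char *)p;
	(void)first;

	/* For a writable mapping, msync(2) would push dirty pages back. */
	if (msync(p, len, MS_SYNC) == -1)
		warn("msync");

	munmap(p, len);
	close(fd);
	return (0);
}

If I read Poul-Henning correctly, the point is that the VM object backing
such a mapping would provide the caching and read-ahead, so no separate
buffer cache or caching GEOM layer would be needed.
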
>> Enabling mmap(2) on devices would be nice, but it would also create
>> problems with revoke(2). The revoke(2) syscall allows revoking access to
>> open devices (e.g. a serial console). This is required to securely log
>> out users. The existing file descriptors are marked as revoked and will
>> return EIO on every access. How would you implement gracefully revoking
>> mapped device memory? Killing all those processes with SIGBUS/SIGSEGV
>> would keep the system secure, but it would be far from elegant.
>>
>> _______________________________________________
>> freebsd-scsi@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-scsi
>> To unsubscribe, send any mail to "freebsd-scsi-unsubscribe@freebsd.org"
>
>
> --
> Best Regards,
> Aijaz Baig
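On the revoke(2) point: from the application side, I suppose the least-bad
outcome is that a revoked mapping raises SIGBUS on the next access rather
than the process being killed outright. Here is a minimal, untested sketch
of a process guarding accesses to such a hypothetical mapping with a
SIGBUS/SIGSEGV handler and sigsetjmp/siglongjmp; the device path
/dev/ttyu0, and the assumption that such a mapping would be permitted at
all and would fault after revoke(2), are purely illustrative guesses, not
current kernel behaviour:

#include <err.h>
#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf fault_env;

static void
fault_handler(int sig)
{
	/* Jump back out of the faulting access instead of dying. */
	siglongjmp(fault_env, sig);
}

int
main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = fault_handler;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGBUS, &sa, NULL);
	sigaction(SIGSEGV, &sa, NULL);

	/* Hypothetical device mapping; the path is only a placeholder and
	 * mmap() of such a device is assumed, not currently supported. */
	int fd = open("/dev/ttyu0", O_RDWR);
	if (fd == -1)
		err(1, "open");
	size_t len = 4096;
	volatile unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		err(1, "mmap");

	if (sigsetjmp(fault_env, 1) == 0) {
		/* If revoke(2) pulled the device away, this access would
		 * (in this sketch) deliver SIGBUS instead of returning data. */
		printf("byte 0: 0x%02x\n", (unsigned)p[0]);
	} else {
		/* Mapping gone: degrade gracefully instead of being killed,
		 * e.g. drop the mapping and try to reopen, or exit cleanly. */
		fprintf(stderr, "mapping revoked, falling back\n");
	}

	munmap((void *)p, len);
	close(fd);
	return (0);
}

That would keep the process alive, but as Jan says it is far from elegant:
every access to the mapping has to be wrapped, and most existing programs
would simply die.
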
--
Best Regards,
Aijaz Baig