Date: Tue, 2 Jun 1998 18:10:01 +0400 (MSD)
From: bag@sinbin.demos.su (Alex G. Bulushev)
To: julian@whistle.com (Julian Elischer)
Cc: eivind@yes.no, sepotvin@videotron.ca, current@FreeBSD.ORG
Subject: Re: I see one major problem with DEVFS...
Message-ID: <199806021410.SAA06048@sinbin.demos.su>
In-Reply-To: <3572FBD0.33590565@whistle.com> from "Julian Elischer" at "Jun 1, 98 12:06:56 pm"
> This is the single best argument I've heard for allowing
> devfs-type nodes on a normal fs. :-)
>
> Certainly DEVFS makes the case of providing devices to chroot
> environments a lot more 'heavyweight'.
>
> A number of things to note about this:
> 1/ There is a suggestion that there be a mount option that simply
>    mounts an EMPTY devfs, which could then be populated using some
>    form of mknod (which uses the name to create the device, not the
>    major/minor).
>
> 2/ One would need to do this on each reboot or login..
>    alternatively a single master might exist and be referenced by
>    a nullfs mount, unless they all wanted different devices
>    (e.g. just their own tty device).
>
> I agonised over this when trying to figure out a way of making
> dynamic devices.  I eventually came to the conclusion that
> leaving devices around across reboots was more of a security
> risk than recreating them to a known state on boot or when required.

Maybe a special "devfs" layer in the generic fs structure could solve this
kind of problem?  Files of type 'device' would be treated specially, via the
file name or a special id, when the fs is mounted or when a device is
created via mknod.  For this the fs would need a special dirent+inode chain
for initializing the devfs layer at mount time ... or, better, a special
file of device paths/names for each fs (like quota.user and quota.group),
for example:

  mount /dev/sd0e /mnt
        ^- without devfs layer initialization

  mount -o devname=/var/devname/dev.name /dev/sd0e /mnt
        ^- with devfs layer initialization

/var/devname/dev.name would contain:

  relative path     device name
  /dir1/dev/        ttyp0
  /dir1/dev/        ttyp1
  /dir2/dev/        ttyp0
  /dir2/dev/        ttyp1

This way we could use a special part of the standard mount procedure
instead of custom scripts (for mknod/rm), still have the ability to mount
a devfs, and we could even mount via nfs:

  mount -t nfs -o devname=/var/devname/dev.name nfs.server.net:/mnt /mnt
                          ^^^^^^^^^^^^^^^^^^^^^
                          local file

> My guess is that each VM (virtual machine?) would either have its
> devices added as it is entered by a user (or at least checked),
> or at reboot time by some custom scripts.
> (You must be doing this with custom scripts anyway.)
>
> The two missing pieces are:
> 1/ the ability to mount an empty devfs
> 2/ the ability to create a single node in it (the reason for
>    this discussion)
>
> A workaround for the moment would be to mount a full one, mv
> the devices you need to .hold, rm -rf everything else,
> and mv them back.

Not excellent ... :)

> julian
>
> Alex G. Bulushev wrote:
> >
> > > On Sat, May 30, 1998 at 05:02:14PM -0400, Stephane E. Potvin wrote:
> > > > Maybe this will seem a stupid question, but why in the first place
> > > > would someone want to delete a device from a devfs /dev?  Or, put
> > > > differently, why is devfs not append-only, so someone would be able
> > > > to make new links but not able to delete existing devices?
> > >
> > > For use in a chroot()'ed environment.
> >
> > There are several problems with devices in a chroot'ed environment;
> > for example, a real system (we use it):
> > 1. about 500 chroot'ed "virtual machines"; each VM's /dev contains only
> >    the necessary devices (tty??), created by mknod when the VM is created
> > 2. the users' fs on the main server, with the VMs and the /dev for each
> >    VM, is mounted via nfs on several hosts where users really work
> >    (chroot on nfs)
> > 3. each VM can be created or deleted while the system is running on the
> >    main server
> >
> > And what about the future of this scheme with the new devfs ideas?
> > mount devfs for each VM on main server and hosts where users work?
> > and unmount devfs on each host before VM deleted?

> what do you mean 'server' and what do you mean by
> "hosts where users work"?

server == the nfs server for all the other hosts
hosts where users work == the nfs clients where users log in and run

  nfs_server
      |                                        private ethernet for nfs mount
  ---+---------+---------+--------+---------+--
     |         |         |        |         |
   host1     host2     host3    host4     host5    <- hosts for user
     |         |         |        |         |         login/run
  ---+---------+---------+--------+---------+--+   external ethernet
                         |
                       router
                         |
  ---+---------+---------+--------+---------+--+
     |         |         |        |         |
    TS1       TS2       TS3      TS4       TS5      <- terminal servers
  ^modem    ^modem    ^modem   ^modem    ^modems

> julian

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-current" in the body of the message
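
Julian's interim workaround (mount a full devfs inside the VM's root, set
aside the nodes that VM should see, delete the rest, put the kept nodes
back) might look roughly like the shell sketch below.  The devfs mount
syntax, the VM root path, and the device list are illustrative assumptions,
not details confirmed in the thread, and it presumes devfs permits mkdir,
mv and rm of its nodes:

  #!/bin/sh
  # Sketch of the "mount a full devfs and prune it" workaround.
  # VMROOT and KEEP are examples only; the devfs mount invocation is
  # assumed, and devfs is assumed to allow mkdir/mv/rm.

  VMROOT=/vm/vm001              # chroot area for one virtual machine
  KEEP="ttyp0 null zero"        # the only devices this VM should see

  mount -t devfs devfs $VMROOT/dev || exit 1
  cd $VMROOT/dev || exit 1

  # park the nodes we want to keep in a dot-directory ...
  mkdir .hold
  for d in $KEEP; do
          mv "$d" .hold/
  done

  # ... remove everything else (the glob does not match .hold) ...
  rm -rf *

  # ... then put the kept nodes back
  mv .hold/* . && rmdir .hold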
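
For contrast, the existing static-/dev scheme Alex describes (populate each
VM's /dev with mknod at VM-creation time, remove the tree at deletion time)
is roughly the sketch below.  The path and the major/minor numbers are
examples only; on a real system they come from the local MAKEDEV and the
running kernel, not from this fragment:

  #!/bin/sh
  # Sketch of the current per-VM static /dev scheme.
  # Path and major/minor numbers are illustrative only.

  VMROOT=/vm/vm001

  vm_create() {
          mkdir -p $VMROOT/dev
          # give the VM only the pseudo-ttys (and basics) it needs
          mknod $VMROOT/dev/ttyp0 c 5 0
          mknod $VMROOT/dev/ttyp1 c 5 1
          mknod $VMROOT/dev/null  c 2 2
  }

  vm_delete() {
          rm -rf $VMROOT
  }

  # With a devfs-only /dev, vm_create would instead become a devfs mount
  # (plus pruning) on the main server and on every nfs client, and
  # vm_delete would first need a matching umount on each of those hosts --
  # the extra weight being questioned in this thread.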