From: Terry Lambert <tlambert2@mindspring.com>
Date: Fri, 06 Jun 2003 00:44:02 -0700
To: "Matthew D. Fuller"
Cc: arch@freebsd.org
Subject: Re: Making a dynamically-linked root

"Matthew D. Fuller" wrote:
> On Thu, Jun 05, 2003 at 04:35:56AM -0700 I heard the voice of
> Terry Lambert, and lo! it spake thus:
> >
> > And if init or mount gets hosed?
>
> Oh, come on.  You're smarter than that.
>
> If a static /sbin/init gets hosed, you're screwed.
> If a dynamic /sbin/init gets hosed, you're screwed.
>
> If /lib/libc gets hosed, your dynamic /sbin/init is screwed.  Your
> static /sbin/init still moves along just fine.
>
> It's not that static binaries eliminate SPoF's.  They just reduce the
> scope of some failures.  Whether that reduction is sizeable or lost in
> the noise is left as an exercise to the reader (presuming the reader
> understands the concept of "different strokes").

I'd argue that it's lost in the noise.

You can divide the world into two types of installations: those where
you can readily get to the physical hardware, including a console, and
those where you can't.

In the case where you can get to the console, dynamic libraries add no
real additional hardship: booting a recovery CD or floppy is at least
as workable as running the obstacle course of finding the non-corrupt
subset of commands that would let you repair the rest of them.  The
smart admin simply boots install media for the version that was
installed before the corruption, selects "upgrade" from the sysinstall
menu, and lets it automatically recover everything except the sources,
which sysinstall refuses to install over.

In the case where you have to access the system remotely, there are
basically two default choices: serial console and ssh.
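(Incidentally, for anyone who wants to see how large that "non-corrupt
subset of commands" actually is on their own box, here's a rough
sketch, typed from memory rather than cut and pasted: it assumes
file(1) prints "dynamically linked" for dynamic ELF executables, so
adjust the pattern if your file(1) words it differently.)

    # Classify the traditional root binaries as static or dynamic,
    # going by what file(1) reports for each one.
    for f in /bin/* /sbin/*; do
            if file "$f" | grep -q 'dynamically linked'; then
                    echo "dynamic: $f"
            else
                    echo "static:  $f"
            fi
    done

On a stock static root that should print "static" for everything under
/bin and /sbin; on a dynamically-linked root, the same loop shows
exactly what a libc corruption would take down with it.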
For the serial console, there are a much larger number of single points
of failure between you and your statically linked shell, all in /boot:
boot0, boot1, boot2, kernel/kernel.ko, loader, loader.4th, loader.rc,
defaults/loader.conf, and any user-installed configuration files
(there's at least one, to drop the "-P" into so that the serial console
is active).

Minimally, with roughly ten items already in that chain, adding one
more point of failure increases your odds of failure by at most about
1 in 10, and that's if you have to do anything at all and count only
"mount -u -o rw /" and "/bin/sh".  The real increase is much smaller
than that: you have to include the terminal server, etc., plus the
minimum subset of software needed to get the system back to a
functioning state.  If that involves a "make install" or a compile of
any kind, the additional risk is comparatively infinitesimal, since
most of the tools between you and the system being alive again are
dynamically linked ("install", etc.).

In the ssh case, well, there's even more stuff between you and your
statically linked shell, since you can't get in until the system is
fully up; and even then, dynamic linking doesn't increase your risk at
all:

# ldd /usr/sbin/sshd
/usr/sbin/sshd:
        libopie.so.2 => /usr/lib/libopie.so.2 (0x2808b000)
        libmd.so.2 => /usr/lib/libmd.so.2 (0x28095000)
        libssh.so.2 => /usr/lib/libssh.so.2 (0x280a0000)
        libcrypt.so.2 => /usr/lib/libcrypt.so.2 (0x280d4000)
        libcrypto.so.2 => /usr/lib/libcrypto.so.2 (0x280ee000)
        libutil.so.3 => /usr/lib/libutil.so.3 (0x281b8000)
        libz.so.2 => /usr/lib/libz.so.2 (0x281c4000)
        libwrap.so.3 => /usr/lib/libwrap.so.3 (0x281d2000)
        libpam.so.2 => /usr/lib/libpam.so.2 (0x281db000)
        libc.so.5 => /usr/lib/libc.so.5 (0x281e3000)

The telnetd, rshd, rlogind, and rexecd are all in the same boat.

> > You're not so much missing anything as you are ignoring the
> > examples which are inconvenient to arguing your position.
>
> A reasonable statement, but equally true in reverse.
>
> Dynamic _everything_ multiplies the number of single failures that can
> completely screw you by making many more failures able to indirectly
> b0rk basic things like "getting a shell".

I admit that the current shells have a much larger library footprint
than the former (a)sh or csh did.  But sshd already requires many of
the libraries required by csh (at least), if not "sh".  In the absolute
worst case, the old shells could be installed as "oldreliablecsh" or
"oldreliablesh", if you wanted to make a big deal about the other
libraries being somehow more sensitive than the libraries sshd already
depends upon.

Note: $HOME mounts would also need to work, unless you went out of your
way to change the default config files to permit root logins directly
via ssh, instead of making people use user logins, which could fail for
lack of a home directory.

> For extra points, find the false statement:
> - Static-linked systems are immune from corruption failures.

False.

> - Dynamically-linked systems no more failure modes than static.

"True" for those situations where you can't get at the console except
via ssh, and "False, to a small degree, but you don't care" for those
situations where you can physically access the console, since you can
reboot from, and recover using, standard install media.

-- Terry
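P.S.: In case anyone wants to try the "oldreliablesh" fallback, a rough
sketch, typed from memory and untested: it assumes the stock
bsd.prog.mk NO_SHARED knob still forces a static link, and the install
name and path are obviously just placeholders.

    # Rebuild /bin/sh statically and park a copy under another name as
    # a last-resort shell.  NO_SHARED=yes asks bsd.prog.mk to link this
    # one program with -static; nothing else is rebuilt.
    cd /usr/src/bin/sh
    make clean
    make NO_SHARED=yes
    # (If you build with object directories, the binary lands under
    # /usr/obj instead of the source directory.)
    install -o root -g wheel -m 555 sh /bin/oldreliablesh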