Date: Wed, 22 Mar 2000 16:08:53 -0800 (PST)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Jim Bryant <jbryant@ppp-207-193-186-239.kscymo.swbell.net>
Cc: bright@wintelcom.net (Alfred Perlstein), rminnich@lanl.gov, hackers@FreeBSD.ORG, linux-kernel@vger.rutgers.edu, Michael.Lampe@iwr.uni-heidelberg.de
Subject: Re: How a normal user can crash any linux system (fwd)
Message-ID: <200003230008.QAA94325@apollo.backplane.com>
References: <200003222155.PAA35282@ppp-207-193-186-239.kscymo.swbell.net>
:In reply:
:> This is cross-posted to both linux-kernel and freebsd-hackers, please
:> set your replies properly.
:>
:> > I found the following by accident playing with PVM.  If you start the
:> > 'gexample' from the examples directory with dimension=10000 and no of
:> > tasks=32 on one machine, it becomes almost immediately completely un-
:> > usable and begins with heavy swapping.  Considering how much memory
:> > would be necessary for this computation before starting it would have
:> > avoided the trouble.
:
:well, there are other ways to make a system slow to a crawl....
:
:a good preventative measure is to never give shell accounts unless
:everyone is accountable.
:
:#!/bin/csh
:/usr/games/primes 1 4294967295 >&/dev/null&
:/usr/games/primes 1 4294967295 >&/dev/null&
:/usr/games/primes 1 4294967295 >&/dev/null&
:...

    Generally speaking, if a user wants to crash a machine, he can crash
    a machine.  We've probably 'fixed' a dozen crashability holes in the
    last year, but they come from a virtually unlimited supply.  Short of
    being ridiculously draconian, there will always be a way to twist the
    resource limits you are given to take a machine down or DOS it into
    unusability.

    And resource limits do not cover every conceivable facet.  For example,
    just before the 4.0 release it was found that one can easily run the
    kernel out of vm_map_entry resources by mmap()ing hundreds of thousands
    of tiny files.  We added a vm.max_proc_mmap sysctl to control that.
    And it is still quite easy to bypass the datasize resource limit
    through the use of MAP_ANON mmap()s.

    What resource limits are good at doing is preventing 'stupid user'
    mistakes.  For example, at BEST we capped CGI exec time at 10 cpu
    seconds (else our web servers would build up hundreds of broken CGI
    processes), but we set it as a soft limit so users who know what they
    are doing can unlimit it.  I imposed a 40 process and 64 descriptor
    limit as being reasonable, but that's still 2560 descriptors to a user
    trying to mess things up on purpose.

    Quotas are useful for preventing a user from accidentally filling up a
    disk partition.  For example, I had a 100MB quota on /tmp.  But quotas
    can be bypassed in a number of ways - excessive syslogging, for
    instance (log files are owned by root).

    Resource limits are useful for protecting against a purposeful DOS
    attack insofar as it is generally the script kiddies who don't know
    their asses from their fingers who are trying to crash the machines.
    There is no way to protect against someone who knows what they are
    doing.

    So, in conclusion: resource limits are absolutely necessary to keep a
    multi-user machine running smoothly, but the necessity has virtually
    nothing to do with preventing purposeful attacks.

						-Matt
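
[Editorial note: a minimal C sketch of the MAP_ANON point above.  It is not
from the original message; the 64MB size is arbitrary, and the exact
accounting of anonymous mappings against RLIMIT_DATA varies by system and
era - the code simply illustrates the mechanism described: anonymous mmap()
memory is obtained outside the sbrk()-grown data segment that the datasize
limit traditionally measures.]

/*
 * Sketch: allocate memory with an anonymous mapping rather than
 * malloc()/sbrk().  Per the message above, such memory historically was
 * not charged against the datasize (RLIMIT_DATA) resource limit.
 */
#include <sys/types.h>
#include <sys/time.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	struct rlimit rl;
	size_t len = 64UL * 1024 * 1024;	/* 64MB, chosen arbitrarily */
	char *p;

	if (getrlimit(RLIMIT_DATA, &rl) == 0)
		printf("datasize soft limit: %lld bytes\n",
		    (long long)rl.rlim_cur);

	/*
	 * Anonymous memory comes straight from the VM system, not from
	 * growing the data segment, which is why it can slip past the
	 * datasize limit printed above.
	 */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE,
	    -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	memset(p, 1, len);	/* touch the pages so they are really allocated */
	printf("mapped and touched %zu bytes of anonymous memory\n", len);
	munmap(p, len);
	return (0);
}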
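
[Editorial note: the soft-versus-hard limit behaviour behind the 10-cpu-second
CGI cap can be sketched with the standard getrlimit(2)/setrlimit(2) calls.
The original message contains no code; this example only illustrates the
stated point that a user process may raise its own soft limit up to the hard
limit without privilege.]

/*
 * Sketch: a process that "knows what it is doing" raises its soft CPU
 * limit back up to the hard limit, exactly the escape hatch a soft
 * RLIMIT_CPU cap is meant to leave open.
 */
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_CPU, &rl) == -1) {
		perror("getrlimit");
		exit(1);
	}
	printf("cpu limit: soft=%lld hard=%lld (seconds)\n",
	    (long long)rl.rlim_cur, (long long)rl.rlim_max);

	/*
	 * Raise the soft limit to the hard limit.  An unprivileged process
	 * may always do this; only raising the hard limit requires root.
	 */
	rl.rlim_cur = rl.rlim_max;
	if (setrlimit(RLIMIT_CPU, &rl) == -1) {
		perror("setrlimit");
		exit(1);
	}
	printf("soft cpu limit raised to %lld seconds\n",
	    (long long)rl.rlim_cur);
	return (0);
}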