Date: Mon, 20 Sep 1999 10:52:22 -0600
From: Nate Williams <nate@mt.sri.com>
To: Jobe <jobe@attrition.org>
Cc: "Rodney W. Grimes" <freebsd@gndrsh.dnsmgr.net>, security@FreeBSD.ORG
Subject: Re: Real-time alarms
Message-ID: <199909201652.KAA01218@mt.sri.com>
In-Reply-To: <Pine.LNX.3.96.990920000340.13128J-100000@forced.attrition.org>
References: <199909200629.XAA57821@gndrsh.dnsmgr.net> <Pine.LNX.3.96.990920000340.13128J-100000@forced.attrition.org>
[ Mondo snipper ]

> Basically what I was trying to get at is that we really need to know what
> 'events' (in-kernel happenings) we want to be aware of.  Otherwise we
> will end up developing a method for generating alarms for kernel events
> when there are no given events to generate alarms for.

Ahh, but there's no easy (extensible, efficient) way of setting policy in
the kernel that doesn't have significant side-effects.

> Do you see what
> I'm trying to get at here?  Or is our primary goal to just create the
> auditing system and let the user define the events for which alarms are
> generated?

That's my goal.  If we 'audit' everything, then we can re-use the same
information over and over in different contexts to do 'intrusion
detection'.

Case in point: A remote user logs in (not a bad event).  Same user
becomes root via su (not necessarily a bad event).  User edits
/etc/passwd, and a bunch of files in /var/log/* (probably a bad sign :)

Note, if we assumed that any write/read from /var/log is bad, then we'd
generate a number of false alarms.  Also, syslog writes to /var/log, and
newsyslog rotates the files, so it's a normal occurrence to see
reads/writes to those files.

A user-land process could also take advantage of the fact that we *may*
instrument syslogd and newsyslog to generate a record stating that they
are modifying a system file, thus making it that much harder to 'spoof'
the fingerprint.

As with all security measures, anything that can be done can be undone,
but the harder you make it, the more likely the attacker will miss a step
and set off the 'detector'.

Our job (as intrusion/security experts) is to limit 'detector' hits to
*real* breakins, and to maximize the percentage of *real* breakins that
we recognize.

The best products in the field today achieve only about a 15% hit-rate on
*real* breakins that happen on *real* networks, simply because they are
using signature models that are based on old breakins, and only stupid
people use old information to break into systems.  These 'best' products
also have a false hit rate of 100 false hits/day or worse, which makes it
nearly impossible to differentiate between a 'real' breakin and a 'false'
breakin without spending way too many resources evaluating each and every
'hit'.

However, that's a side issue, in that once someone (you?) finds a good
way to recognize breakins without relying on 'signature analysis of
existing known breakins' (using statistical analysis, I would suspect,
and you can analyze all sorts of behavior this way, including network
traffic, user patterns, etc.), they can *still* use the same information
gathered in /dev/audit to correctly recognize breakins.  (This is
assuming we've correctly/completely audited the system to give you the
necessary information for you to make the decision.)

Anyway, I'm digressing badly, so I'll shut up now. :)


Nate
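To make the "audit everything, decide in user-land" idea concrete, here is
a minimal sketch of the kind of user-land consumer described above.  It is
hypothetical: the message mentions /dev/audit, but the record layout
(struct audit_rec), the event-type constants, and the self-announcement
flag below are all invented for illustration and are not a real FreeBSD
interface.  The sketch correlates a remote login, an su to root, and a
subsequent write to /etc/passwd or /var/log/* into a single alarm, while
records tagged by a self-announcing daemon (an instrumented syslogd or
newsyslog) pass without comment.

/*
 * Hypothetical sketch only: /dev/audit's record format, struct audit_rec,
 * and the AUDIT_* constants are invented for illustration.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define AUDIT_LOGIN     1       /* remote user logged in */
#define AUDIT_SU        2       /* user became root via su */
#define AUDIT_WRITE     3       /* file opened for writing */
#define AUDIT_SELF_TAG  0x100   /* flag: daemon announced this write itself */

struct audit_rec {              /* assumed record layout */
        uid_t   ar_uid;         /* uid the event is charged to */
        int     ar_event;       /* AUDIT_* above, possibly OR'd with flags */
        char    ar_path[256];   /* path involved, if any */
};

/* crude per-uid state: has this uid logged in remotely and then su'd? */
#define MAXUID  65536
static unsigned char seen_login[MAXUID];
static unsigned char seen_su[MAXUID];

static int
sensitive(const char *path)
{
        return (strcmp(path, "/etc/passwd") == 0 ||
            strncmp(path, "/var/log/", 9) == 0);
}

int
main(void)
{
        struct audit_rec rec;
        int fd = open("/dev/audit", O_RDONLY);  /* hypothetical device */

        if (fd < 0) {
                perror("/dev/audit");
                return (1);
        }
        while (read(fd, &rec, sizeof(rec)) == (ssize_t)sizeof(rec)) {
                uid_t u = rec.ar_uid % MAXUID;

                switch (rec.ar_event & ~AUDIT_SELF_TAG) {
                case AUDIT_LOGIN:
                        seen_login[u] = 1;      /* not a bad event by itself */
                        break;
                case AUDIT_SU:
                        seen_su[u] = 1;         /* not necessarily bad either */
                        break;
                case AUDIT_WRITE:
                        /* writes announced by instrumented daemons are normal */
                        if (rec.ar_event & AUDIT_SELF_TAG)
                                break;
                        /* the *sequence* login -> su -> sensitive write alarms */
                        if (seen_login[u] && seen_su[u] &&
                            sensitive(rec.ar_path))
                                printf("ALARM: uid %u modified %s after "
                                    "remote login + su\n",
                                    (unsigned)rec.ar_uid, rec.ar_path);
                        break;
                }
        }
        close(fd);
        return (0);
}

The point of the sketch is only where the policy lives: the kernel emits
records for everything, and the decision about which sequences matter is
made entirely in this (replaceable) user-land process.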
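The aside about statistical analysis could be illustrated the same way:
the identical audit stream can feed an anomaly detector instead of a
signature matcher.  The sketch below is again an assumption-laden
illustration, not any real detector; it keeps a running mean and variance
(Welford's method) of one hypothetical feature, "sensitive writes per
hour per uid", and flags an hour that is far above that uid's own
baseline.  A real system would use many more features (network traffic,
user patterns, etc.).

/*
 * Hypothetical sketch: per-uid baseline of hourly sensitive-write counts,
 * derived from the same audit records as above.  Build with -lm.
 */
#include <math.h>
#include <stdio.h>

struct baseline {
        unsigned long n;        /* hours observed so far */
        double mean;            /* running mean of hourly counts */
        double m2;              /* running sum of squared deviations */
};

/* Returns 1 if 'count' looks anomalous against this baseline, else 0. */
static int
update_and_check(struct baseline *b, double count)
{
        int anomalous = 0;
        double stddev, delta;

        if (b->n >= 24) {       /* only judge after a day of history */
                stddev = sqrt(b->m2 / (double)b->n);
                if (count > b->mean + 3.0 * stddev + 1.0)
                        anomalous = 1;
        }
        /* Welford's update: fold this hour into the baseline. */
        b->n++;
        delta = count - b->mean;
        b->mean += delta / (double)b->n;
        b->m2 += delta * (count - b->mean);
        return (anomalous);
}

int
main(void)
{
        struct baseline root_writes = { 0, 0.0, 0.0 };
        int hour;

        /* 48 quiet hours, then one very busy hour. */
        for (hour = 0; hour < 48; hour++)
                update_and_check(&root_writes, (hour % 3 == 0) ? 1.0 : 0.0);
        if (update_and_check(&root_writes, 25.0))
                printf("hour 48: write rate far above this uid's baseline\n");
        return (0);
}

Nothing on the kernel side changes between the two sketches, which is the
argument for auditing completely first and leaving the detection policy
to user-land.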
