Date: Fri, 14 Aug 2020 13:47:52 -0500
From: Tim Daneliuk <tundra@tundraware.com>
To: FreeBSD Mailing List <freebsd-questions@freebsd.org>
Subject: Re: OT: Dealing with a hosting company with it's head up it's rear end
Message-ID: <97fd6d35-ef35-8583-5ef2-3ea761c36c12@tundraware.com>
In-Reply-To: <CAGBxaXkYpjUGwFwR-WZo9Ud0b_ZwmP7QVY74QH3vyt0Z12NmXQ@mail.gmail.com>
References: <CAGBxaXmg0DGSEYtWBZcbmQbqc2vZFtpHrmW68txBck0nKJak=w@mail.gmail.com>
 <CAGBxaX=XbbFLyZm5-BO=6jCCrU+V+jubxAkTMYKnZZZq=XK50A@mail.gmail.com>
 <CALeGphwfr7j-xgSwMdiXeVxUPOP-Wb8WFs95tT_+a8jig_Skxw@mail.gmail.com>
 <CAGBxaX=CXbZq-k6=udNaXTj2m+gnpDCB+ui4wgvtrzyHhjGeSw@mail.gmail.com>
 <40xvq0.qf0q3x.1hge1ap-qmf@smtp.boon.family>
 <CAGBxaX=9asO=X32RucVyNz5kppPhbZc9Ayx-pyiXMBi85BeJ6w@mail.gmail.com>
 <20200814004312.bb0dd9f1.freebsd@edvax.de>
 <20200814065701.2b390145ac6d189161bc31b4@sohara.org>
 <173ed205550.27bc.0b331fcf0b21179f1640bd439e3f4a1e@tundraware.com>
 <CAGBxaX=gs57EXsm028+6Var89MUoGh-7d1gfPdGmbm5gPBnufA@mail.gmail.com>
 <4d320acd-a995-7a35-5c0e-c2c22e7e6f96@radel.com>
 <CAGBxaXnjDAnZPjx_nksb_ed-f+X=PowLTUYMX706oMScd8HDaw@mail.gmail.com>
 <df55f102-228f-021d-62ba-b26520e78740@radel.com>
 <CAGBxaXkYpjUGwFwR-WZo9Ud0b_ZwmP7QVY74QH3vyt0Z12NmXQ@mail.gmail.com>
On 8/14/20 12:49 PM, Aryeh Friedman wrote:
> If the controls can be circumvented they are essentially useless and
> shouldn't be in place in the first place.  Besides anyone who knows what
> RDP or SSH is would also know how to circumvent controls designed for
> non-technical people so that makes the blocking of them even more short
> sighted.  This is what I meant by security by obfuscation (i.e. hiding
> obvious truths that everyone with any knowledge knows).

I am not taking a position on whether blocking ssh is always good, bad, or
irrelevant.  However, I pretty fundamentally disagree with the position
above as written.  It is absolutely possible to dramatically reduce the
technical attack surface by limiting which ports can be accessed on a
given machine.

For example, suppose I have some batch process that ingests data and
produces some sort of results.  Assume that I only permit the inbound data
and outbound results to be made available over a single mechanism - let's
use an MQ system if you like.  No other ports of any kind are open beyond
the TCP/IP interface to the MQ system.  Let's further suppose that access
to the MQ system, in- or outbound, is narrowly limited in time with
dynamic firewalling/network rules.  And let's harden this even more by
encrypting those inbound and outbound payloads with one-time pads.

Can that system NEVER be compromised?  Of course it can, but the
compromise has to happen either at the physical server (or, by proxy, the
hosting entity's console interface), OR it has to happen somewhere
*outside* the server itself.  Think about what an attack on this system
would entail:

- Hacking access into the private network where all this runs.

- Figuring out how to compromise access to the MQ system at the moments in
  time it was handling traffic to/from the server AND showing up as a
  legitimate subscriber to those topics.
- Figuring out how to crack a one-time-pad-encoded payload - something
  known to be impossible for a properly generated, never-reused pad, no
  matter how much compute you throw at it - quantum cell phones included.

Is the risk zero?  No.  And certainly the same set of concerns has to be
extended to the surrounding infrastructure (network, the MQ system, the
key management and distribution system, ...).  But the system as described
above, built with proper rigor and skill, is really, really, REALLY hard
to break into, in large part because the only place the plain data lives
is a server that has only very brief connections with anything, and then
only over a very narrow mechanism.

My point is that the "principle of least privilege" is very much a proper
construct for designing security-hardened systems.  So not allowing ssh on
a system with a web server isn't security by obscurity.  It's just
limiting the attack surface ... a very reasonable decision for some
applications.

In general, security has to be seen as a risk management activity, not a
technical one.  The amount of security focus on, say, the nuclear launch
codes had jolly well better be exponentially greater than that protecting
the grocery list on your cell phone.  But *if* you need great protection,
reduction of access is entirely legit.

The truth is that the single greatest weakness in the design above has
nothing to do with the technology at all.  It has to do with the recipient
of the report generated by our mythical server.  If that recipient is a
person, the risk is that they will "leak" the report outside the
organization in a stupid or malevolent manner.  THAT is what Data Loss
Prevention systems are supposedly addressing (often poorly, in my
experience).
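For what it's worth, the one-time-pad idea above is simple enough to show
in a few lines of Python.  This is only a toy illustration of the XOR
construction, not anything from the setup I described; the names
(otp_encrypt, otp_decrypt) are mine, and real key distribution is the hard
part that this sketch entirely ignores:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte.

    The pad must be truly random, at least as long as the message,
    and NEVER reused -- those three conditions are what make it
    unbreakable regardless of the attacker's compute budget."""
    if len(key) < len(plaintext):
        raise ValueError("one-time pad must be at least as long as the message")
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

message = b"batch results: 42 records processed"
pad = secrets.token_bytes(len(message))   # fresh random pad, used once

ciphertext = otp_encrypt(message, pad)
recovered = otp_decrypt(ciphertext, pad)
assert recovered == message
```

Without the pad, every plaintext of the same length is an equally likely
decryption of the ciphertext - which is why the interesting attacks are on
the key management and the endpoints, not the math.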
Most companies try to materially reduce this particular threat by turning
off USB access on laptops, eliminating any form of remote access outside
their own networks, dividing their networks into separate, hardened
subnets, doing deep scans and audits on email traffic, and so forth.  And
yet, even when done with almost infinite money and endless security
paranoia, this remains one of the most intractable problems in
information security.

Two words: Edward Snowden

-- 
----------------------------------------------------------------------------
Tim Daneliuk     tundra@tundraware.com
PGP Key:         http://www.tundraware.com/PGP/