Date:      Wed, 31 Oct 2001 11:14:24 +0100
From:      "Anthony Atkielski" <anthony@atkielski.com>
To:        <questions@FreeBSD.ORG>
Subject:   Re: Tiny starter configuration for FreeBSD
Message-ID:  <004c01c161f4$d22bb9c0$0a00000a@atkielski.com>
References:  <005a01c161ed$a19933c0$1401a8c0@tedm.placo.com>

Ted writes:

> UNIX has no parallel to the atrocious 3rd party
> DLL management under NT nor does it have a parallel
> to a unified config file for all applications.  Both
> of these design disasters are in my opinion
> responsible for most of the reports of instability
> in Windows NT that occurs when changing things.

Sort of.  My experience has been that most NT _crashes_ are due to buggy drivers
running with full trust in the kernel (and I note that all of the ones that have
given me trouble have been non-MS drivers--MS at least seems to know how to
write stable drivers, although admittedly it doesn't write very many itself).  However, it is
true that when it comes to just messing the system up (short of a crash),
registry and DLL conflicts are the usual cause.  Few vendors have installation
programs that adequately check before blasting DLLs, and some
products--including a number of Microsoft products--either blast registry
entries or create five different flavors of the same thing in the registry, such
that you don't know which tree of values really is the one that currently
governs your configuration.

> While by default NT is configured insecure, current
> UNIX versions are not configured insecure by default.

Yes, although UNIX is such an insecure system by nature that this is not saying
much.

> Anyway, the point is that obtaining security
> certification by removing the floppy and network
> adapter is dishonest.  A server is unusable without a
> network adapter.

This is one reason why those certifications are no longer a big deal.

> Code Red and Nimda proved that 99.99% of NT admins
> DID NOT do this.

I think it safe to say that the average NT admin is far less sophisticated than
the average UNIX admin (and is also paid less, according to what I've read,
which makes sense).  And a fair number of NT admins are probably quite clueless,
since NT is friendly enough that it can be installed by someone who really has
no idea what he is doing--whereas anyone who is that dim would have a hard time
getting completely through a UNIX installation.

> Even today, months afterward, people are still seeing
> thousands of code red scans a day, so there's still a
> large group of NT admins out there that are still
> clueless and causing problems.

As long as my system is safe, I don't worry about the unwashed masses.

> At some point, Microsoft has to take some of the
> culpability for selling a holey OS to clueless masses.

I disagree.  The clueless will always be vulnerable--that's why they are called
clueless.  The only criticism I'd have of Microsoft is that too much of the
system is hidden from the user, making it difficult for even a savvy admin to
find and plug holes.  There is too much going on behind the scenes in an NT
system, and sometimes you get rude surprises when you find out that magic
service XYZ has been opening your system to the world for the past six months.

> Properly configured NT shares and Samba shares are
> no less secure than FTP access.

Probably, but I still want the environments separate.  One system is for my
"UNIX world" activity, and the other is for my "Windows world" activity.  I
don't plan to make them friendly with each other unless I'm being paid for it!
I actually think that products like SecureFX or WS-FTP are just fine for my
purposes.

> First of all, TCP/IP IS inefficient on a LAN compared
> to a lot of simpler protocols like NetBIOS or IPX.

So you get 1900 KB/sec instead of 1500 KB/sec.  If all you need is 20 KB/sec,
this is not a big issue.

> Today of course with 10Mbt and 100Mbt LANS this isn't
> a concern.

Yup.

> But it sure was a concern on ancient crap like Arcnet
> which is why Novell designed IPX.

I always hated IPX.  Fortunately I don't have to deal with ancient crap anymore
(at least not on my own time).

> And, what "fancier" ones are you talking about?

ATM comes to mind, although I don't know much about it.

> According to Microsoft, the software IS broken,
> that is why a patch was released.

It's broken if you are using the part that needs to be patched.  But if the
patch addresses a problem that you never encounter, it's better not to apply it,
from a stability standpoint.  I've tried both the patch-as-needed and the
patch-it-all approaches, and I find that the former requires less effort and
creates fewer headaches.  When you patch to correct problems that you've never
encountered, not only are you expending extra effort, but you risk breaking
something that previously worked.

> Sorry, but OS/2 was just as advanced, in fact more
> so than NT in a lot of ways.

OS/2 was built to look like MS-DOS, and that was a major design flaw--it doomed
the system.

> The "mainframes" that these developers were previously
> designing for had CPU's that were less powerful than
> a 14.4K modems and lacked features that are
> taken for granted on PC CPU's.

The slower the processor, the better you have to be in order to write an
efficient operating system for it.

But as any mainframe veteran knows, there's a lot more to the mainframe/micro
distinction than just processor speed.

> I don't know why the word "mainframe" has such an impression
> on you, the CPU architecture of the 386 was lightyears
> ahead of anything that DEC had in a production mainframe.

DEC never built mainframes, only minicomputers.

> In fact the only significant operational difference
> between a mainframe like a VAX and a 80386 is that
> the VAX had great I/O, and could support hundreds of
> terminals attached to it.

That difference is the sine qua non for many applications.

> But, significantly, NT Server had piss-poor I/O and
> was not multiuser, in short most of the items that made
> a mainframe different than a PC were not implemented
> in NT.

Agreed.  NT is not a mainframe OS, but it borrows much of the architecture of
mainframe operating systems, to its credit.

> I don't know why it is that you think that these
> Digital designers took all this experience and used
> it to design NT, because NT is mostly unlike what
> was going on in VMS and UNIX both of which these
> Digital designers were working on.

There are many general principles of mainframe OS design that cross vendor
boundaries.  You can see the difference just by comparing the NT and Windows 9x
source code.

> No that has nothing to do with it.

It has everything to do with it.  Current incarnations of Windows go through
hundreds of millions of instructions to do what olden-day PCs did in a few
thousand instructions.  It takes a lot of horsepower to antialias fonts and
paint pixels and animate buttons.  In fact, it takes way more horsepower to do
that than it does to accomplish the tasks that nominally justify the machine,
such as word processing, accounting, and the like.

> The reason it takes just as long to get anything done
> is that humans (who actually are the ones that do anything)
> have not increased in speed tenfold.

Then why does UNIX perform so much better than NT on a given hardware
configuration?  It's harder for human beings to use, but it contains a lot less
code, and less code is executed for tasks of similar net utility.

> But you still cannot type up a document faster in
> WordStar for DOS running on a Pentium than Microsoft Word
> for Windows running on that same system.

Typing isn't the issue.  The issue is what happens when you click a button to
have the computer do something for you.  My Windows system, for example, does
more disk I/O in thirty seconds than an equivalent UNIX system is likely to do
all day.  The overhead of Windows is staggering.

> This is only true if the system is not connected to
> a network, which most systems these days are.

Network connections have no effect on this.

> You may have no interest in changing anything, but
> the world will force you to change.

Not so.  I know of companies that are still running software from the 1960s on
their systems, in production.

> The world sends you new file format documents which
> you want to read so you have to upgrade, the world sends
> you viruses which you must protect against, the world
> sends you trojans and worms which you must patch,
> and often upgrade, to protect against.

This is only partially true for PCs and desktops, and not true at all for other
types of systems.

Most people who have grown up with PCs have been brainwashed into thinking that
perpetual upgrades and updates are normal and mandatory.  But just as you don't
buy a new washing machine every three months (I hope!), you don't need to buy a
new PC or replace your OS or applications every few months, either.  If they do
what you want, no changes are required, ever.

