Date:      Wed, 1 Oct 1997 15:21:32 +0200
From:      Jeremy Lea <reg@shale.csir.co.za>
To:        Mike Smith <mike@smith.net.au>
Cc:        chat@FreeBSD.ORG
Subject:   Security and authentication
Message-ID:  <19971001152131.36386@shale.csir.co.za>
In-Reply-To: <199709301307.WAA00501@word.smith.net.au>; from Mike Smith on Tue, Sep 30, 1997 at 10:37:16PM +0930
References:  <19970930100711.04631@shale.csir.co.za> <199709301307.WAA00501@word.smith.net.au>

Mike, I'm going to pretty much ignore your reply and all the others under it
and try this again...

The thread drift on this topic is horrible. *Please* people, can we try to
stay on one topic? In six generations of messages this has moved from "I want
a GUI to configure DNS" to "generic package installation tools".

To provide some focus, this is how I see things:

A system has a number of administrative phases:
 - Initial installation/configuration
 - Package/user/hardware installation/configuration
 - Configuration tweaking
These require an authentication and security layer to ensure that these
actions can be taken. It is this layer I want to deal with. If you want to
deal with anything else in the above list, *change the subject line*.

Problem statement:

"How do you verify that user X can perform administrative function Y on
machine Z."

We have three players here:

X - A person/entity. Could be a user of the system, could be some other user
    with power over the system, could be someone supplying software (like
    FreeBSD).

Y - An administrative function. A meta-function which changes the way that
    the system performs its real functions. This could range from
    disklabelling a drive to changing the icon Enlightenment uses to
    indicate the GIMP ;).

Z - A computer system. Hardware, running kernel and processes.

Current status:

Rights are based on user:group combinations and file access rights. Users
can change any administrative settings they want if they own the
configuration files, or have group access rights to these files. Users with
access to the special uid 0 (root) can access everything and change all of
the files. User X must have an account on machine Z to change Y.

[NIS and company should allow network-based rather than machine-based
logins, as I understand them? I also just pulled the XSSO spec from your web
page to have a look at that...]

Problems:

1. Coarse-grained implementation. There are really only three levels of
   security: root, group and user. In practice root access is required for
   most non-trivial administrative tasks. User X must have a uid on machine
   Z. They might also have to know the root password, and belong to a
   specific group W. Outside users/entities cannot be trusted.

2. Network security. Can the pipe to the machine (of whatever form) be
   trusted? Normally no: it can be sniffed, snooped, spoofed and subjected
   to all sorts of other nasty things.

3. Granting access to function Y normally also grants configuration rights
   over all of the functions related to Y. User X is simply trusted not to
   play with those other functions.

[Please list any other problems...]
 
Ideal solution:

Machine Z has some fine grained method of determining if the function Y is
permitted by user X, without relying on the security of the channel.

Proposal:

User X and machine Z are the only two fixed entities in this puzzle.
Function Y is highly variable. Therefore, anchor X and Z by giving each a
fixed, unchangeable identity, in the form of a verifiable signature. This is
provided the first time that user X becomes known, and upon the initial
installation of machine Z. Organise these signatures into webs of trust.
Trust no one but yourself.

Machine Z is a dumb thing; it can be tortured into revealing its signature.
User X will put up a fight (three to four days of being chained to a
Windows95 machine ;), so assume X can always be trusted, but make sure that
Z hasn't been won over.

When you create Z's identity, make the creator (i.e. the machine's owner)
sign that identity, to verify it is real, and make Z check its identity at
any point where it may have been changed. Let X do their own thing, by
encrypting their identity.

Z keeps a record of all functions Y (in plain text files, or what have
you); the functions are grouped and have a hierarchy of access inheritance,
like say oh, a file system. Access to each function is granted to only one
user (say A).

User X submits a change Y to Z, via a channel of some form or by editing the
file directly. Z looks and sees that X is not A, who is the only user with
rights. So Z asks the question "Does A trust X to make this change?". Z
looks to see if A has signed X's identity, and if so, performs the change. A
is only trusted because A's identity is signed by Z.

Machine Z might also not maintain its own access hierarchy. It may trust
another machine (like its NIS master server S) to provide it with a set of
trusted relationships. It would then trust any identity signed by S, because
Z had signed S's identity.

You could also implement groups, by having Z sign a group identity B and
having B sign X's key. This would give you finer control over access.

OK, enough theory. Notice that I haven't mentioned anything about PGP or
any other product or language. This uses "public key encryption", some of
which is patented and has usage limitations. It is a model for developing
a "web of trust" between machines and users. It is a security model. It has
nothing to do with the channels of communication or the information being
transferred.

This is something that could (well, I believe) be implemented through the
framework of Unix user:group access control and a YP/NIS-like system, to do
general authentication and access control.

It could also provide a mechanism whereby a PGP public key (say for an
entity named FreeBSD, Inc ;), verified by user A, is registered with machine
Z, and then anything signed by PGP with that public key is allowed A's
access rights. Like, oh, say a package.

Now for some practical stuff.

On Tue, Sep 30, 1997 at 10:37:16PM +0930, Mike Smith wrote:
> Look; all these ideas have great technical merit, but no commonsense.  
> Stop and think for a few seconds about what actually has to be 
> achieved in order to make this model work.  Visualise how your design 
> would be used in a few different situations, e.g.:
> 
>  - Single user at home.

User installs FreeBSD, which creates the machine's public and private keys,
protected by the root password. The user proceeds to create an account for
themselves, which automatically generates a public and private key for the
user (protected by their password). The install also created users and
groups (look in /usr/src/etc), which are admin accounts protected by an
admin password. The user logs in as admin and signs their own public key as
group admin.

They're an old-fashioned kinda guy and don't like these "browser" things, so
they edit the config files with vi. The kernel looks and says: hey, these
files are owned by the admin group; does the admin group trust X? Yes: admin
has signed X's public key, and I've signed admin's key, so I know that X is
trusted. Go ahead, rewrite the file.

>  - Server in busy ISP application.
[What particular problem are you referring to here?]

The ISP has ten machines, all running various servers. The four PPP servers
all trust one another and use the same password db. The three techies have
key pairs generated when their accounts are created, and then the big chief
(who knows the admin password) signs their keys with group ppp's key. They
sign their own home account's key with their PPPserver key and register
their own home key (now signed) with PPPservers. They log in remotely via
some form of channel, and their changes are trusted because they are signed
with a trusted key.

>  - Development workstation.

Power user X decides to try out application foobar from FreeBSD, Inc. He
gets a copy of FreeBSD, Inc's PGP key (how is his problem) and signs it into
the system with his local user key. He gets the signed package for foobar
and does a pkg_add, which checks to see if the key is known and approved. It
is, so it then goes through the motions of adding the package.

>  - Large corp/tertiary network.

Well, let's take the example of a CS lab with 100 identical computers. They
all have machine keys generated at install time, which are signed by a
central server. They all trust the server, so you then create accounts on
the server and all the machines implicitly trust those keys. They also all
obey instructions from any key signed by the server, including reboots,
installs, updates, flashing their BIOS, etc.

> Also consider that whatever the interface, it has to work with a 
> textmode browser (ie. lynx).

Like I said, I was only really addressing the security model.

However, if you were reading carefully you would have noticed that I glossed
over one major point in the ISP example above... How do you sign arbitrary
data on an unspecified channel? You have to have something on the other side
which can sign that data. It could be signed diffs via e-mail (easy), or it
could be a two-way IP connection to an all-singing Java application on the
other side, or it could be a connection over an SSL layer, where the user
appears to be local. A browser needs some way of doing this... Java,
plugins, ActiveX ;-), or a local httpd which implements a signed protocol in
the background, or a remote shttpd which can set its uid based on the
connection. Probably a few dozen other ways too.
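
Whatever the transport, the core operation is the same: a detached
signature over the data, checked on the far side. A minimal sketch (again
illustrative Python; the diff content is made up):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Sending side: sign the change before it goes down the
    # (untrusted) channel.
    user_key = Ed25519PrivateKey.generate()
    diff = b"--- ppp.conf\n+++ ppp.conf\n+set speed 115200\n"
    sig = user_key.sign(diff)

    # Receiving side: the channel proves nothing; only the
    # signature against the registered public key counts.
    try:
        user_key.public_key().verify(sig, diff)
        print("change accepted")
    except InvalidSignature:
        print("change rejected")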

> Too complicated to be the only security model.  What's PGP?  "Fred the 
> new admin needs to do XYZ, how do we give him permission?"

Login as XYZadmin and sign his key, or pop up the user management web page,
click on Fred, click on Options and then Sign key.

> By contrast, the proposed Tcl application method wins in that :
>  - It can use any stream encryption for client/server comms (eg. ssh)

Like I said, I wasn't really talking about making a secure stream.

The stream would still be vulnerable to people snooping the un-encrypted
private keys... but if you've got people doing that, then the channel is not
where your hole is; it's in your memory management. However, the machine
cannot reasonably encrypt its private key (which it needs, but only when
signing things), which means it is particularly vulnerable. As I mentioned
way up top, the owner/installer signs the machine's key, and then the
machine can try to verify that its key hasn't been hijacked (say the key is
loaded into the kernel at bootup and prints "Owned by: Joe Bloggs" to the
console).

I hope I'm getting this across and that this e-mail doesn't also go down
like a ton of bricks...

 -Jeremy

-- 
  |   "I could be anything I wanted to, but one things true
--+--  Never gonna be as big as Jesus, never gonna hold the world in my hand
  |    Never gonna be as big as Jesus, never gonna build a promised land
  |    But that's, that's alright, OK with me..." -Audio Adrenaline


