Date:      Thu, 02 Oct 1997 00:21:11 +0930
From:      Mike Smith <mike@smith.net.au>
To:        Jeremy Lea <reg@shale.csir.co.za>
Cc:        config@freebsd.org, chat@freebsd.org
Subject:   Re: Security and authentication 
Message-ID:  <199710011451.AAA00561@word.smith.net.au>
In-Reply-To: Your message of "Wed, 01 Oct 1997 15:21:32 +0200." <19971001152131.36386@shale.csir.co.za> 


I've cc'd this message to -chat to catch the interested players.  
Please move replies to -config, however, as that is where this 
discussion belongs.

> The thread drift on this topic is horrible. *Please* people, can we try and
> stay on one topic? In six generations of messages this has moved from "I want
> a GUI to configure DNS" to "generic package installation tools".

This is because the issues at hand are extremely broad and general.  
Attempting to maintain such a narrow focus is pointless, particularly 
when it is necessary to look at a larger picture.

> To provide some focus, this is how I see things:
> 
> A system has a number of administrative phases:
>  - Initial installation/configuration
>  - Package/user/hardware installation/configuration
>  - Configuration tweaking

These are not particularly distinct.  They represent differing degrees 
of a single process, that of "administrative control".  They can be 
further characterised into a remarkably small set of actions, which vary 
only in syntax and semantic content.  It is, IMHO, an error to attempt 
to view these as separate processes.

> These require an authentication and security layer to ensure that all actions
> can be taken. It is this layer I want to deal with. If you want to deal with
> anything else in the above list *change the subject line*.

Apologies for the above aside.  This is indeed a vital issue.

> Problem statement:
> 
> "How do you verify that user X can perform administrative function Y on
> machine Z."

Minor nit: "is permitted to", not "can".  "Can" is a different problem 
altogether 8)

> We have three players here:
> 
> X - A person/entity. Could be a user of the system, could be some other user
>     with power over the system, could be someone supplying software (like
>     FreeBSD).
>
> Y - An administrative function. A meta-function which changes the way that
>     the system performs its real functions. This could range from
>     disklabelling to changing the icon enlightenment uses to indicate the
>     GIMP ;).
> 
> Z - A computer system. Hardware, running kernel and processes.
> 
> Current status:
> 
> Rights are based on user:group combinations and file access rights. Users
> can change any administrative settings they want if they own the
> configuration files, or have group access rights to these files. Users with
> access to a special uid 0 (root) can access everything and change all of the
> files. User X must have an account on machine Z to change Y.
> 
> [NIS and company should allow network based rather than machine based logins
> as I understand them? I also just pulled the XSSO spec from your web page to
> have a look at that...]

It's still effectively the same model; the rights for X to apply Y to Z 
are determined by permissions attached to the objects that are the 
immediate target of Y.  I agree with your implied assessment that this 
is a poor association between Y and selectors for a valid X; the 
association should be between a logical rather than physical group of Y 
and a given X.
 
> Problems:
> 
> 1. Coarse grained implementation. There are really only three levels of
>    security: root, group, user. In practice root access is required for most
>    non-trivial administrative tasks. User X must have a uid on machine Z.
>    They might also have to know the password for root, and belong to
>    specific group W. Outside users/entities cannot be trusted.

As above.
 
> 2. Network security. Can the pipe to the machine (of whatever form) be
>    trusted. Normally no, it can be sniffed, snooped, spoofed and all sorts
>    of other nasty things.
>
> 3. Configuration for all of the functions related to Y is normally granted
>    by providing access to Y. User X is trusted to not play with other
>    functions.

As above again.
 
> [Please list any other problems...]
>  
> Ideal solution:
> 
> Machine Z has some fine grained method of determining if the function Y is
> permitted by user X, without relying on the security of the channel.

This omits the guarantee that the action Y received by Z is actually 
the same action requested by X.  Without channel security, it is 
difficult to see how this can be achieved.
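To make the missing guarantee concrete: even before worrying about channel secrecy, Z needs a way to detect that the request it received is the one X actually issued. A minimal sketch, assuming a key shared between X and Z (all function names here are illustrative, not an existing API):

```python
import hmac
import hashlib

def sign_request(shared_key: bytes, action: bytes) -> bytes:
    # X attaches a MAC so that tampering in transit is detectable by Z.
    return hmac.new(shared_key, action, hashlib.sha256).digest()

def verify_request(shared_key: bytes, action: bytes, tag: bytes) -> bool:
    # Z recomputes the MAC over what it received; a mismatch means the
    # action was altered somewhere between X and Z.
    expected = hmac.new(shared_key, action, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"x-z-shared-secret"          # assumed pre-established key
request = b"disklabel da0"
tag = sign_request(key, request)
assert verify_request(key, request, tag)
assert not verify_request(key, b"disklabel da1", tag)
```

This only protects integrity, not secrecy or replay; it is the narrowest possible reading of "the action received is the action requested".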

> Proposal:
> 
> User X and machine Z are the only two fixed entities in this puzzle.
> Function Y is very variable. Therefore, anchor X and Z by giving them a
> fixed, unchangeable identity. Do this through providing each with a
> verifiable signature. This is provided the first time that user X becomes
> known, and upon the initial installation of machine Z. Organise these
> signatures into webs of trust. Trust no one but yourself.
> 
> Machine Z is a dumb thing; it can be tortured into revealing its signature.
> User X will put up a fight (three to four days of being chained to a
> Windows95 machine ;), therefore assume X can always be trusted, but make
> sure that Z hasn't been won over.
>
> When you create Z's identity, make the creator (i.e. the machine's owner) sign
> its identity, to verify it is real, and make Z check its identity at any
> point where it may have been changed. Let X do their own thing, by
> encrypting their identity.
> 
> Z keeps a record of all functions Y (either in plain text files, or what
> have you) and the functions are grouped and have a hierarchy of access
> inheritance, like say oh, a file system. Access is only granted to one user
> (say A).
> 
> User X submits a change Y to Z, via a channel of some form or by editing the
> file directly. Z looks and sees that X is not A who is the only user having
> rights. So Z asks the question "Does A trust X to make this change?". Z
> looks to see if A has signed X's identity, and if so, performs the change. A
> is only trusted because A's identity is signed by Z.

(Omit the "by editing the file" component.  There's no way that this 
 model belongs inside the traditional operational space of the system; 
 if an administrator wants to fly outside we should try to work with it, 
 but certainly not to that level.)
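The check Z performs in the proposal above reduces to a very small decision procedure. A toy model, with trust one signature deep as described (all names illustrative):

```python
# Who has signed whose identity.  Z signed A (so A is trusted locally),
# and A has signed X.
signatures = {
    "Z": {"A"},
    "A": {"X"},
}

def change_permitted(owner: str, requester: str) -> bool:
    # The owner of the function may always act; anyone else must have
    # had their identity signed by the owner ("does A trust X?").
    return requester == owner or requester in signatures.get(owner, set())

assert change_permitted("A", "A")            # A acts on A's own function
assert change_permitted("A", "X")            # A has signed X's identity
assert not change_permitted("A", "Mallory")  # no signature, no change
```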
 
> Machine Z might also not maintain its own access hierarchy. It may trust
> another machine (like its NIS master server S) to provide it with a set of
> trusted relationships. It would then trust any identity signed by S, because
> S had signed its identity.
> 
> You could also implement groups, by having Z sign group identity B and
> having B sign X's key. This would give you finer control over access.

Pardon my relative naivety, but how does the "web of trust" win you 
anything substantial over enumerating the traversals of the web and 
simply verifying the identity of the endpoints?

You can devolve the "web" you describe into a simple list of foreign 
identities and their granted rights.  If you make the ability to grant 
rights a right in itself, you achieve the "signing" process.
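The devolved form is almost trivially small, which is the point. A sketch of a flat rights table where the ability to grant is itself just another right (all names and rights are made up for illustration):

```python
# Flat table of identity -> granted rights; "GRANT" reproduces the
# signing step as an ordinary right.
acl = {"A": {"edit-dns", "GRANT"}}

def grant(granter: str, grantee: str, right: str) -> bool:
    # Granting succeeds only if the granter holds both the right in
    # question and the GRANT right.
    if {"GRANT", right} <= acl.get(granter, set()):
        acl.setdefault(grantee, set()).add(right)
        return True
    return False

assert grant("A", "X", "edit-dns")           # A may delegate
assert "edit-dns" in acl["X"]
assert not grant("X", "Mallory", "edit-dns") # X holds the right but not GRANT
```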

All that is required then is a secure channel and a means of reliably 
and securely establishing the identity of the X:Z tuple.  Extra tuples 
can be obtained from S.

> OK, enough theory. Notice that I haven't mentioned anything about PGP or
> any other product or language. It is using "public key encryption", some of
> which is patented and has usage limitations. This is a model for developing
> a "web of trust" between machines and users. It is a security model. It has
> nothing to do with the channels of communication or the information being
> transferred.

Understood.  My point is that you have a hammer with considerable 
trend value (the "web of trust"), and you are hitting a smaller nail 
with it.

Trust webs are good for distributed, dynamic trust relationships.  I'm 
not sure the complexity involved is warranted here; in particular, 
unless you co-locate the key objects with X, all you have is an 
indirect ACL in a complex and inefficient format.

Am I missing something here?

> This is something that could (well, I believe) be implemented through
> the framework of Unix user:group access control and a YP/NIS-like system, to
> do general authentication and access control.

I'm not convinced that access rights to apply Y to some Z should 
require any other rights at all (including right of access, ie. user/
group ID).  The ability to inherit rights from a foreign server (S 
above) is indeed desirable.

> It also could provide a mechanism whereby a PGP public key (say for an
> entity named FreeBSD, Inc ;) verified by user A was registered with machine
> Z and then anything signed by PGP with that public key was allowed A's access
> rights. Like oh, say a package.

Ie. allowing policy selection for the installation API based on secure 
identification of the package source.  Jordan and I have discussed 
this; it's something I think everyone agrees is highly desirable.
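The policy selection amounts to a registry mapping approved signing keys to the rights of whoever registered them. A sketch under assumed names (the key names and registry are hypothetical, not any real pkg_add mechanism):

```python
# Registry of signing keys registered with machine Z and the access
# rights assigned by their registrant (user A in the quoted example).
approved_keys = {
    "freebsd-inc-pgp-key": "admin",
}

def install_policy(signing_key: str) -> str:
    # An unknown key gets no rights at all; a known key inherits the
    # rights its registrant assigned to it.
    return approved_keys.get(signing_key, "denied")

assert install_policy("freebsd-inc-pgp-key") == "admin"
assert install_policy("unknown-key") == "denied"
```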

[proof-of-concept applications to practical scenarios]
> >  - Single user at home.
> 
> User installs FreeBSD, which creates the machine's public and private keys,
> protected by root password. User proceeds to create an account for
> themselves which automatically generates a public and private key for the
> user (protected by their password). The install also creates users and
> groups (looks in /usr/src/etc), which are admin accounts and protected by an
> admin password. User logs in as admin and signs their own public key as
> group admin.

I would go further and say that even before users are created, the 
installer has obtained "god" rights to the administration system.  This 
then allows them to perform the installation and create users.  The 
more I think your model through, the more the separation between 
administration entity and login account seems to make sense.

> They're an old-fashioned kinda guy and don't like these "browser" things, so
> they edit the config files with vi. The kernel looks and says: hey these
> files are owned by the admin group, does the admin group trust X? yes. admin
> has signed X's public key. I've signed admin's key, so I know that X is
> trusted, so go ahead and rewrite the file. 

No.  This is outside the domain under consideration, and we shouldn't 
be distracted by this.

> >  - Server in busy ISP application.
> [What particular problem are you referring to here?]

Mostly authentication and security on a relatively vulnerable network.  
This really just requires secure communications between the client and 
server, unlike the example above which does not, although it still 
requires the admin to authenticate themselves.

> >  - Development workstation.
> 
> Power user X decides to try out application foobar from FreeBSD, Inc. He
> gets a copy of FreeBSD, Inc's PGP key (how is his problem) and signs it into
> the system with his local user key. He gets the signed package for foobar
> and does a pkg_add, which checks to see if the key is known and approved. It
> is, so it then goes through the motions of adding the package.

The installation should add a known set of trusted keys, which can be 
used to sign others suitable for inclusion. 

> >  - Large corp/tertiary network.
> 
> Well, let's take the example of a CS lab with 100 identical computers. They
> all have machine keys generated at install time, which are signed by a
> central server. They all trust the server, and you then create accounts on
> the server and all the machines implicitly trust those keys. They also all
> obey instructions from any key signed by the server, including reboots,
> installs, updates, flashing their BIOS, etc.

This introduces a number of significant questions about how you 
determine who the "ultimately trusted" entity is when establishing the 
system in the first instance.  It sounds as though, in the above, the 
handover of trust must come post-install, when the system is directed 
to abandon trust in the installer and instead demand a signature from 
the central system.
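That handover can be stated very simply: the machine starts out trusting only its installer's key, and only the holder of the current anchor may replace it with the central server's key. A sketch, with all key names hypothetical:

```python
class TrustAnchor:
    """The single 'ultimately trusted' key a machine will obey."""

    def __init__(self, installer_key: str):
        self.anchor = installer_key

    def hand_over(self, current_key: str, server_key: str) -> bool:
        # Only the holder of the current anchor may replace it; this is
        # the post-install step where trust moves to the central server.
        if current_key == self.anchor:
            self.anchor = server_key
            return True
        return False

z = TrustAnchor("installer-key")
assert not z.hand_over("random-key", "lab-server-key")  # attacker fails
assert z.hand_over("installer-key", "lab-server-key")
assert z.anchor == "lab-server-key"
```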

I hope you're taking notes on all this.  We will expect a tidy signing 
and key-management API to do this.
 
> However, if you were reading carefully you would have noticed that I glossed
> over one major point in the ISP example above... How do you sign arbitrary
> data on an unspecified channel?

Specifically, you don't.  The channel has to be trusted.  What 
constitutes a "trusted" channel is dependent on the application; a 
standalone system requires nothing special, whereas in an ISP or 
corporate network something much more robust may be required.

> > By contrast, the proposed Tcl application method wins in that :
> >  - It can use any stream encryption for client/server comms (eg. ssh)
> 
> Like I said, I was only really talking about making a secure stream.

... but you're not; you're discussing verifying the rights of X at the 
other end of an arbitrary stream, and how these rights should be 
maintained and distributed.

> I hope I'm getting this across and that this e-mail doesn't also go down
> like a ton of bricks...

Not at all.   These issues *must* be discussed to the point where 
consensus can be reached, even before we can start complaining about 
how you're not off implementing them.  I greatly appreciate your time 
and effort.

mike




