Date:      Tue, 4 Apr 1995 19:22:59 -0600
From:      nate@sneezy.sri.com (Nate Williams)
To:        terry@cs.weber.edu (Terry Lambert)
Cc:        nate@trout.sri.MT.net (Nate Williams), rgrimes@gndrsh.aac.dev.com, freebsd-hackers@freefall.cdrom.com
Subject:   Re: new install(1) utility
Message-ID:  <199504050122.TAA08559@trout.sri.MT.net>
In-Reply-To: <9504042358.AA22545@cs.weber.edu>
References:  <199504042329.RAA08021@trout.sri.MT.net> <9504042358.AA22545@cs.weber.edu>

> > > ${DESTDIR}${BINDIR}${PROG}: ${.OBJDIR}/${PROG}
> > > 	install ${COPY} ${STRIP} -m ${BINMODE} -o ${BINOWN} -g ${BINGRP} \
> > > 		${PROG} ${DESTDIR}${BINDIR}
> > > 
> > > install:	${DESTDIR}${BINDIR}${PROG}
> > 
> > Ahh, but what if ${DESTDIR}${BINDIR}${PROG} was older than
> > ${.OBJDIR}/${PROG} simply because it was deleted during a purge of
> > /usr/obj. My argument is that it doesn't *need* to be installed
> > (especially in the case of libraries).
> 
> You could argue that it was then a mistake to rebuild the binary,
> since the generated binary that already exists is newer than the
> source files from which it is derived... 

True, but sometimes header files have changes in them which don't affect
certain binaries; for safety's sake we still must rebuild the binary
because the system has no way of knowing that.  Since the binaries
aren't any different, we shouldn't install the binary even though it has
a newer date.
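
To illustrate what I'm after (this is only a sketch of the behaviour I'd
like to see, not anything install(1) or the current Makefiles do today),
the install step could simply skip the copy whenever the freshly built
binary is byte-for-byte identical to the installed one:

# Sketch only -- and it ignores the ${STRIP} wrinkle, since a stripped
# installed binary will never compare equal to the unstripped one.
${DESTDIR}${BINDIR}/${PROG}: ${.OBJDIR}/${PROG}
	cmp -s ${.OBJDIR}/${PROG} ${DESTDIR}${BINDIR}/${PROG} || \
	    install ${COPY} -m ${BINMODE} -o ${BINOWN} -g ${BINGRP} \
		${.OBJDIR}/${PROG} ${DESTDIR}${BINDIR}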

> and you have to admit
> that the build is going to take you a hell of a lot more time than
> a useless install for binaries, and your point is not valid for
> header file installs with Rod's patch

Rod's 'patch' is the current scheme for installing files in
/usr/include.  He quoted part of the /usr/src/include Makefile.  I'd
like to see 'install' extended to do this if we supply the appropriate
(new) command line flag.  This would clean up the Makefiles which are
hacked to work around what I consider a deficiency in install, and
allow us to easily add this functionality to other Makefiles, such as
the library ones.
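
For the headers, the workaround amounts to something like this (a rough
sketch of the cmp-and-copy idea, not a verbatim quote of the
/usr/src/include Makefile):

# Only copy a header when its contents differ from the installed copy,
# so the installed file's timestamp is left alone when nothing changed.
includes:
	@for h in *.h; do \
		cmp -s $$h ${DESTDIR}/usr/include/$$h || \
		    install -c -m 444 -o ${BINOWN} -g ${BINGRP} \
			$$h ${DESTDIR}/usr/include; \
	done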

>(which is only necessary because
> "install" is bogusly used instead of "cp -p" in building the "finished"
> include directory anyway).

Hmm, I'm getting the feeling that Terry thinks 'cp -p' will solve all of
the world's problems.  It won't.  See other email.

> This is the problem with builds that go src->obj->bin without making
> the obj step contingent on a bin->src dependency check.
> 
> Not that I think that would be easily resolvable, but...

Hey, as long as we want the best solution, why not ask for it all. :-)

> 	cd /usr/src
> 	for i in bin/*
> 	do
> 		cd $i
> 		make install
> 		make clean
> 		cd ../..
> 	done

Tell you what.  Go off and show me a build system that *works* and does
all these things, and when you are *ALL* done I'm sure we'll all take a
look at it and go 'neato, we've got to have it'.  Until then it's all
just speculation about something that 'could be better' but is
*extremely* difficult to get right.

> > An include file change in one file will cause all of the libraries to
> > be re-compiled that depend on it, but it doesn't *necessarily* mean that
> > there were any changes in the library or it's functionality.
> 
> So what you are arguing is the idempotency of include dependencies?
> 
> The fix for this is to make it so any change to an include file must
> change the behaviour of the code that includes it.  In other words,
> don't extern both printf and sprintf in the same include file, etc.

Yeah, right.  We are *not* going to be making itty-bitty include files
which are all completely separate from each other, so that we know that
IF a file changes then it will result in a binary difference.

Every single function will require a new include file.  I can see it
now.

"Okay everyone, I'm adding a new db function.  If you want to use it
you'll need to include <db/hash/process/myfun.h>, or else you won't get
its function prototype."

vs.

"I'm modifying <db/hash.h> to add a
new function which is used by my program and a couple others.  No other
existing programs use this function, but it will probably be more useful
in the future."

I've been in shops where both of these have occurred, and everyone
prefers the latter approach, which makes more work for the CPU but a lot
less work for the programmer.  This is supposed to make our lives
easier, not more difficult.
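
Just to make the point concrete, here's the sort of thing I mean (the
file names are hypothetical, and it assumes the .depend file records
that hash.o uses <db/hash.h>):

	cp hash.o hash.o.before                    # save the current object
	echo 'int my_new_fn(int);' >> db/hash.h    # only the header's mtime changed
	make hash.o                                # make rebuilds it anyway
	cmp -s hash.o hash.o.before && \
	    echo "object is byte-for-byte identical -- nothing worth installing"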

> That these objects are/aren't different from the objects already in
> the library is a optimization for "ar" to make, not "install", unless
> you are going to buy into incremental compilation technology (and
> with the level of stupidity in the current linker -- which can't
> even do inter-object type checking -- I don't see that happening).

Reality check, Terry.  We are talking about the tools we have today, not
tomorrow.  We're not going to modify every build tool in existence so
your perfect dependency world can be satisfied.

From my point of view, we are getting way too far out in left field for
this conversation to have any relevance to reality, so I'm (once again)
bowing out.  Consider this to be my last posting on the subject.


Nate


