Date:      Tue, 4 Apr 95 17:57:58 MDT
From:      terry@cs.weber.edu (Terry Lambert)
To:        nate@trout.sri.MT.net (Nate Williams)
Cc:        rgrimes@gndrsh.aac.dev.com, freebsd-hackers@freefall.cdrom.com
Subject:   Re: new install(1) utility
Message-ID:  <9504042358.AA22545@cs.weber.edu>
In-Reply-To: <199504042329.RAA08021@trout.sri.MT.net> from "Nate Williams" at Apr 4, 95 05:29:41 pm

> > ${DESTDIR}${BINDIR}${PROG}: ${.OBJDIR}/${PROG}
> > 	install ${COPY} ${STRIP} -m ${BINMODE} -o ${BINOWN} -g ${BINGRP} \
> > 		${PROG} ${DESTDIR}${BINDIR}
> > 
> > install:	${DESTDIR}${BINDIR}${PROG}
> 
> Ahh, but what if ${DESTDIR}${BINDIR}${PROG} was older than
> ${.OBJDIR}/${PROG} simply because it was deleted during a purge of
> /usr/obj. My argument is that it doesn't *need* to be installed
> (especially in the case of libraries).

You could argue that it was then a mistake to rebuild the binary,
since the generated binary that already exists is newer than the
source files from which it is derived... and you have to admit
that the build is going to take a hell of a lot more time than a
useless install does for binaries.  Your point is also not valid for
header file installs with Rod's patch (which is only necessary
because "install" is bogusly used instead of "cp -p" in building the
"finished" include directory anyway).
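
To be concrete about what the difference buys you for headers (the
flags and paths here are only for illustration):

	install -c -o bin -g bin -m 444 stdio.h /usr/include
	# the installed copy gets "now" as its modification time, so
	# it always looks newer than the source it was copied from

	cp -p stdio.h /usr/include
	# the copy keeps the source's modification time, so a later
	# "is the installed header out of date?" check still means
	# something

That is, the timestamp games only become necessary because "install"
throws the source timestamp away.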

This is the problem with builds that go src->obj->bin without making
the obj step contingent on a bin->src dependency check.

Not that I think that would be easily resolvable, but...
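
Very roughly, the rule would have to look something like the
following -- a sketch only, using the usual bsd.prog.mk variable
names, with the rule itself made up and untested:

${PROG}: ${SRCS}
	@if [ -f ${DESTDIR}${BINDIR}/${PROG} ] && \
	   [ -z "`find ${SRCS} -newer ${DESTDIR}${BINDIR}/${PROG}`" ]; then \
		echo "installed ${PROG} is not older than the sources"; \
	else \
		${CC} ${CFLAGS} -o ${PROG} ${SRCS}; \
	fi

i.e., skip the compile entirely when the thing that is already
installed is newer than every source it was derived from.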

Actually, that would be a vastly superior approach anyway.  It is
highly likely that people are going to mount a CDROM, union mount
some real storage on top of that, change one or two small things,
then build.
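
Something along these lines, where the device name and the writable
directory are only examples:

	mount -t cd9660 -o ro /dev/cd0a /usr/src
	mount -t union /scratch/src-shadow /usr/src

so the CDROM supplies the sources and the union layer catches the
one or two files you actually touch.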

The expectation is that the build will never take more space than
that necessary for a single binary plus its object files on top of
the currently installed storage, with the exception of staged builds,
like the development tools.

In other words, something like:

	cd /usr/src
	for i in bin/*
	do
		cd $i
		make install
		make clean
		cd ../..
	done

This would allow anyone with nearly zilcho disk space to rebuild
their entire system, and it would further mean that only the new
things end up being built at all.


I don't see makefiles becoming non-interpreted any time soon, so it
will be nearly impossible to compute transitive closure over the
cyclic src->obj->bin graph to the level that it is desirable to do
so -- unless you plan on rewriting make some time soon so that it
can remain interpreted but support an alternate syntax for some
type of ordered staging of dependencies.
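
By "ordered staging" I mean nothing more exotic than being able to
write something like this (target names made up, echo commands
standing in for the real work):

includes:
	@echo install the include files first
libraries: includes
	@echo then build and install the libraries
binaries: libraries
	@echo then build the binaries against the installed libraries
world: binaries

where each stage would have to be complete across the whole source
tree before the next one starts trusting any timestamps.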


> An include file change in one file will cause all of the libraries to
> be re-compiled that depend on it, but it doesn't *necessarily* mean that
> there were any changes in the library or its functionality.

So what you are arguing is the idempotency of include dependencies?

The fix for this is to make it so any change to an include file must
change the behaviour of the code that includes it.  In other words,
don't extern both printf and sprintf in the same include file, etc.

Otherwise, an include change must result in at least the objects
that are stated to depend on it being rebuilt.
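
In .depend terms, with made-up file names:

uses_printf.o: uses_printf.c printf_decl.h
uses_sprintf.o: uses_sprintf.c sprintf_decl.h

touch printf_decl.h and only uses_printf.o gets rebuilt; put both
externs in one header and both objects get rebuilt whether they care
or not.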

Whether these objects are or aren't different from the objects
already in the library is an optimization for "ar" to make, not
"install", unless you are going to buy into incremental compilation
technology (and with the level of stupidity in the current linker --
which can't even do inter-object type checking -- I don't see that
happening).
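
The check itself, if anybody wanted it, is about this much shell
(library and member names are just examples):

	if ar p libfoo.a bar.o 2>/dev/null | cmp -s - bar.o; then
		echo "bar.o unchanged, leaving libfoo.a alone"
	else
		ar r libfoo.a bar.o
	fi

i.e. the place to suppress a no-op update is at the archive, not at
install time.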


> If the libraries don't differ at a binary level, there weren't any
> changes, so the library doesn't need to be installed.

If the objects in the library didn't differ at the binary level, a
new library didn't need to be built; and if the objects in the
library are not older than the sources from which they are derived,
they don't need to be rebuilt in order to do a content comparison in
the first place (a comparison which *will* show differences unless
you specifically except ident strings in any case).
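
For anyone who hasn't watched it happen (paths are examples):

	cmp -s libc.a libc.a.old || echo "different"
	ident libc.a | head -3

the $Id$ strings RCS embeds carry revision numbers and dates, so any
delta that bumps a revision changes the objects even when the
generated code is the same, and the two archives never compare equal.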

All I see being saved in the case of a binary file like the library
in this example is a copy of something that was at least two orders of
magnitude more expensive to build than it would be to copy, and that
shouldn't have been built in the first place.

And with shared libraries, you can't argue that the binary needs to
be rebuilt at all unless the binary uses a deprecated interface --
which you can't tell without rebuilding the link dependency graph
using the old interfaces *which were used* vs. the new interfaces.
I suppose you could do that, at what is probably a higher expense
than just installing the damn thing and forgetting about it (the
expense of effectively relinking the binary statically against the
new shared library to determine whether a dynamic link would succeed
or not).
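
Roughly that check, sketched with nm and made-up paths (and whether
nm hands you this cleanly for a dynamically linked binary is another
question):

	nm /usr/bin/vi | awk '$1 == "U" { print $2 }' | sort > /tmp/needed
	nm libc.so.2.1 | awk '$2 ~ /^[TDB]$/ { print $3 }' | sort > /tmp/provided
	comm -23 /tmp/needed /tmp/provided	# anything printed is unresolved

and you would have to run it for every installed binary against
every library it pulls in, every time a library changes.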


I think the header file argument is a valid one; I also think it is
better resolved some other way than by hacking install into
something that needs continuing maintenance to keep it in sync.  I
think the binary argument is a straw man.


					Terry Lambert
					terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.


