Date:      Sat, 5 Apr 2008 16:30:54 -0400
From:      "Maxim Khitrov" <mkhitrov@gmail.com>
To:        "Matthew Seaman" <m.seaman@infracaninophile.co.uk>
Cc:        "Philip M. Gollucci" <pgollucci@p6m7g8.com>, freebsd-ports@freebsd.org
Subject:   Re: FreeBSD Custom Package Server
Message-ID:  <26ddd1750804051330u51ab14dei64a0b61c113a49b4@mail.gmail.com>
In-Reply-To: <47F7D92D.8060805@infracaninophile.co.uk>
References:  <26ddd1750804041811p4bb2c4f5tbab3f9659f88e8bb@mail.gmail.com> <47F7CBBD.4050107@p6m7g8.com> <26ddd1750804051234s67ba8b70l1276fe964e34ab62@mail.gmail.com> <47F7D92D.8060805@infracaninophile.co.uk>

On Sat, Apr 5, 2008 at 3:55 PM, Matthew Seaman
<m.seaman@infracaninophile.co.uk> wrote:
> Maxim Khitrov wrote:
>
> > A request for a new package should contain all the relevant
> > settings. If that means sending the make.conf file from the client to
> > the server - fine. Have the build server adapt for each new request,
> > build the requested port and dependencies, create the package(s), and
> > remove the port from the local system. The client can then download
> > the package, build server goes on to process the next request, and no
> > disk space is wasted. Am I being a bit overambitious? :)
> >
>
>  You'll need not just the data for the package you're going to build, but
>  the same data for all of the dependencies of that package, and you'll need
>  to install all of the dependencies in your build area.  How are you going
>  to handle dealing with OPTIONS screens, not just for the target package
>  but for its dependencies? Especially when changing the OPTIONS will likely
>  change the dependency graph.  Not an insurmountable problem, but not
>  trivial either.
>
>
>
>         Cheers,
>
>         Matthew
>

I've given this some thought. On the one hand, I'd like to eliminate
the need to store the ports tree on all the client machines; on the
other, I think I'd settle for a solution where the tree is still
needed to create the initial request.

If so, you would run 'make config-recursive' on the client machine
before sending the request (this step could be automated too). The
configuration of the port and its dependencies is done locally, and
the relevant files in /var/db/ports are then sent to the server along
with the request.
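To make that concrete, here is a rough sketch of what the client side
could look like. The request layout, file names, and the example data
are all my assumptions, not an existing tool; on a real client the
options files would already exist under /var/db/ports:

```shell
# Hypothetical client-side sketch of packing a build request after
# 'make config-recursive'. The request layout is an assumption; in
# real use OPTIONSDIR would be /var/db/ports.
set -e
PORT="lang/python25"
OPTIONSDIR="${OPTIONSDIR:-./db-ports-example}"
REQ=$(mktemp -d)

# Example data so the sketch runs anywhere; on a real client,
# 'make config' would have written these files already.
mkdir -p "$OPTIONSDIR/python25"
echo 'WITH_THREADS=yes' > "$OPTIONSDIR/python25/options"

echo "$PORT" > "$REQ/port"

# Bundle the saved OPTIONS files for the port and its dependencies.
(cd "$OPTIONSDIR" && find . -name options | tar -cf "$REQ/db-ports.tar" -T -)

# The final request that gets sent to the build server.
tar -C "$REQ" -cf request.tar .
```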

For ports that do not use OPTIONS, I could integrate the system with
ports-mgmt/portconf. If we define a build request as consisting of the
port name, CPUTYPE, CFLAGS, CXXFLAGS, /usr/local/etc/ports.conf, and
all the relevant files in /var/db/ports, then this information would
be sufficient for 95% of the ports out there (I'm guessing) to create
custom packages. I realize that some people set port-specific knobs in
make.conf inside 'if' statements; extracting that information would be
difficult once conditionals are involved, so it may have to be a
matter of policy that port-specific options go through portconf
instead.
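For anyone unfamiliar with portconf: an entry in ports.conf is a port
origin (globs allowed), a colon, and pipe-separated variable
assignments. The origins and knobs below are made-up examples, not a
recommendation:

```
# /usr/local/etc/ports.conf -- illustrative entries.
# Each line: origin-glob: VAR=value|VAR=value
www/apache22: WITH_MPM=worker|WITH_THREADS=yes
databases/mysql*: WITH_CHARSET=utf8
```

Since each entry is keyed by origin, the server can apply them without
ever having to evaluate make.conf conditionals.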

The server would receive all this information and copy over the
client-specified ports.conf file as well as all the options files.
CPUTYPE and the rest can be passed to make on the command line, so
make.conf isn't really necessary at that point. You are quite correct
that all the dependencies have to be installed in the build area, but
this is not as bad as it sounds.
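The server-side invocation could then be assembled along these lines;
the paths and values are illustrative, and PORT_DBDIR is used to point
make at the client's unpacked options files instead of the server's
own /var/db/ports:

```shell
# Hypothetical sketch of the server building the make command line
# from the request; names and paths are assumptions, not a real tool.
PORT="lang/python25"
CPUTYPE="core2"
CFLAGS="-O2 -pipe"
PORT_DBDIR="/srv/build/req-1234/db-ports"   # client's options files unpacked here

# Every setting travels as a command-line override, so no make.conf
# edits are needed on the build server.
MAKE_CMD="make -C /usr/ports/$PORT CPUTYPE=$CPUTYPE CFLAGS='$CFLAGS' PORT_DBDIR=$PORT_DBDIR package"
echo "$MAKE_CMD"
```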

Suppose I have the build server create a custom package for the
python25 port. The first time around, the port and all of its
dependencies are built, installed, and packaged. The server then
uninstalls everything and makes the resulting packages available to
the client for download. Python later gets updated, but suppose all of
its dependencies remain unmodified. The software would detect that the
build options haven't changed either (much as ccache does), so the
only change is python itself. It would then reinstall the dependencies
from the cached packages, rebuild only the new python version, and
again generate a new package for the client.
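The cache check could be as simple as digesting every input that can
change the build output. This is only a sketch of the idea: the cache
layout and names are made up, and sha256sum is the GNU coreutils tool
(FreeBSD would use 'sha256 -q' instead):

```shell
# Hypothetical ccache-style check: digest the port origin, global
# flags, and saved OPTIONS; reuse the cached package when the digest
# matches. All paths and names here are illustrative assumptions.
PORT="lang/python25"
FLAGS="-O2 -pipe"
OPTIONS_FILE="${OPTIONS_FILE:-/dev/null}"   # the port's saved OPTIONS, if any
CACHE="${CACHE:-./pkg-cache}"
mkdir -p "$CACHE"

# One digest over everything that can change the resulting package.
DIGEST=$( { printf '%s\n%s\n' "$PORT" "$FLAGS"; cat "$OPTIONS_FILE"; } \
          | sha256sum | cut -d' ' -f1 )

if [ -f "$CACHE/$DIGEST.tbz" ]; then
    echo "cached: $CACHE/$DIGEST.tbz"
else
    : > "$CACHE/$DIGEST.tbz"   # stand-in for the real build-and-package step
    echo "built: $CACHE/$DIGEST.tbz"
fi
```

A second run with identical inputs finds the file and takes the
"cached" branch; changing FLAGS or the OPTIONS file changes the digest
and forces a rebuild.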

It does have to go through this install/uninstall cycle for each
request, but the overhead in time is not that significant compared to
a recompilation, and you have huge savings in the amount of disk space
in use on both the client and the server. If the server is used by so
many different clients that it's not able to cache all of the custom
packages, then yes, it would have to rebuild some dependencies as
well; that is just the usual trade-off between space and time. For
home users, if I set aside a few gigs of space for the package cache,
the server would be more than capable of maintaining separate versions
of python, php, or whatever else I need, performing only incremental
builds when requested.

- Max


