Date:      Tue, 10 May 2016 15:25:40 +0100
From:      RW <rwmaillists@googlemail.com>
To:        ports@freebsd.org
Subject:   Re: Poudriere question
Message-ID:  <20160510152540.31793420@gumby.homeunix.com>
In-Reply-To: <b2db2a55-49ae-c4b2-ec10-5ba91b1d5056@madpilot.net>
References:  <CAGwOe2Y7HjkK_QxocycmFcKzCUBAVU-87CWqOAzp6ZMUaJMbkA@mail.gmail.com> <3557cbcd-3992-5db5-c5dc-7912508e1956@madpilot.net> <20160510123517.2107653b@gumby.homeunix.com> <b2db2a55-49ae-c4b2-ec10-5ba91b1d5056@madpilot.net>

On Tue, 10 May 2016 14:35:35 +0200
Guido Falsi wrote:

> On 05/10/16 13:35, RW via freebsd-ports wrote:
> > On Mon, 9 May 2016 20:15:12 +0200
> > Guido Falsi wrote:
> >
> >> On 05/09/16 19:52, Fernando Apesteguía wrote:
> >>> Hi all,
> >>>
> >>> Is it safe to use different invocations of poudriere concurrently
> >>> for different jails but using the same ports collection?
> >>>
> >>
> >> Yes it is, or at least should be.
> >>
> >> The ports trees are mounted read-only in the jails, and the wrkdir
> >> is defined at a different path.
> >
> > What about the distfiles directory?
> >
> > Having two "make checksums" running on the same file used to work
> > fairly well, but not any more because the target now deletes an
> > incomplete file rather than trying to resume it.
> >
> > This won't damage packages, but it can cause two "make checksums" to
> > get locked in a cycle of deleting each other's files and end with
> > one getting a failed checksum.
>
> Yes, it happens; I have even used the same distfiles over NFS with
> more than one machine/poudriere accessing it.
>
> The various instances do overwrite each other and checksums do fail,
> but usually in the end one of them "wins" and the correct file ends
> up being completed, with the other instances reading that one. I
> agree this happens just by chance and not due to good design.

Only the last process will terminate with a complete file and without
error; when another process runs out of retries, the file behind the
directory entry is a download still in progress, which fails the
checksum.

If it commonly ends up working in poudriere, that's probably a property
of how poudriere orders things. But you still have the problem of
wasted time and bandwidth. This problem is most likely to bite with
large distfiles, and there's at least one that's 1 GB.


The way this used to work is that the second process would try to
resume the download, which presumably involved getting a lock on the
file. For smaller files it would just work. Worst case was that the
second process would fail after a timeout.
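
If the locking were done explicitly, it might look something like the
sketch below, using FreeBSD's lockf(1) to serialize fetches of one
distfile. The file name and URL are placeholders, and this is only an
illustration of the idea, not what bsd.port.mk actually does:

    #!/bin/sh
    # Hypothetical: serialize fetches so a second builder waits for
    # the first instead of clobbering its in-progress download.
    DISTDIR=/usr/ports/distfiles
    FILE=foo-1.0.tar.gz                 # placeholder name
    URL="https://example.org/${FILE}"   # placeholder URL

    # lockf holds an exclusive lock on the lock file while the command
    # runs; -t 600 gives up after ten minutes, -k keeps the lock file
    # around for the next contender.
    lockf -k -t 600 "${DISTDIR}/${FILE}.lock" \
        fetch -a -o "${DISTDIR}/${FILE}" "${URL}"

The loser of the race simply blocks until the winner finishes, then
finds a complete file already in place and goes straight to the
checksum.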

I think the change came in to delete possibly re-rolled distfiles
automatically (a relatively minor problem), but in the process it
created this race and also broke resuming downloads.

I don't see the reason for checking and deleting the file before
attempting to resume it.
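
A resume-first order could look like the following sketch. The path,
URL and checksum are placeholders, and the real logic lives in the
framework's do-fetch machinery:

    #!/bin/sh
    # Hypothetical resume-first fetch: never delete the file up front.
    file=/usr/ports/distfiles/foo-1.0.tar.gz   # placeholder path
    url=https://example.org/foo-1.0.tar.gz     # placeholder URL
    want="${EXPECTED_SHA256}"                  # from the port's distinfo

    if [ -e "${file}" ]; then
        fetch -r -o "${file}" "${url}"   # -r resumes a partial transfer
    else
        fetch -o "${file}" "${url}"
    fi

    # Only a file that is complete yet still wrong (e.g. re-rolled
    # upstream) gets deleted and refetched from scratch.
    if [ "$(sha256 -q "${file}")" != "${want}" ]; then
        rm -f "${file}"
        fetch -o "${file}" "${url}"
    fi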



> As far as I understand Unix filesystem semantics, each download
> actually creates a new file, with only the last one to start
> holding the directory entry visible on the filesystem. So the last
> one to start downloading is the one which will "win", creating the
> correct file on the FS, then checksumming it and going on. The other
> files have actually been deleted and are simply removed from disk as
> soon as the download ends; if at that point the "winning" one has
> finished its download, the others will checksum that file.
>
> There is a chance that a losing download ends before the winning one
> and overwrites it again, but in my experience, with at most 3-4
> instances over NFS, it usually fixes itself in the long run.
>
> IMHO the best solution is to make sure you already have the
> distfiles on disk for what you are going to build.
>
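
Those semantics are plain POSIX behaviour, nothing poudriere-specific,
and easy to demonstrate from the shell:

    #!/bin/sh
    # A deleted file's data survives while a descriptor stays open.
    echo "first download"  > distfile
    exec 3< distfile                    # hold the old inode open on fd 3
    rm distfile                         # unlink: the name goes, the data stays
    echo "second download" > distfile   # a new inode under the old name
    cat <&3                             # prints "first download" from the
                                        # unlinked inode; its space is freed
                                        # only when fd 3 is closed
    exec 3<&-

So a fetch that "deletes" a rival's file really just detaches the name;
the rival keeps writing to a now-invisible file until it exits.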

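On the last point: pre-fetching serially before starting the parallel
builds avoids the race entirely. A minimal way to do it, assuming the
DISTDIR below is the directory poudriere mounts as its DISTFILES_CACHE
(the port names are just examples):

    #!/bin/sh
    # Fetch and verify all distfiles up front, one port at a time.
    for port in www/nginx editors/vim; do     # example ports
        make -C "/usr/ports/${port}" checksum-recursive \
            DISTDIR=/usr/ports/distfiles
    done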

