Date:      Tue, 1 Dec 2020 16:17:12 +0100
From:      Ulrich Spörlein <uqs@freebsd.org>
To:        Davíð Steinn Geirsson <david@isnic.is>
Cc:        Li-Wen Hsu <lwhsu@freebsd.org>, freebsd-git@freebsd.org
Subject:   Re: 504 errors from cgit-beta
Message-ID:  <CAJ9axoTwGDercocGrsMgZme-8xU7wt1CeEqHAqc5Dd-r2bPYKQ@mail.gmail.com>
In-Reply-To: <20201201095218.GC6221@mail>
References:  <20201112155659.GQ913@mail> <20201113.032709.2108746957258946268.yasu@utahime.org> <CAKBkRUxqVSccn_9KJAJZW0po-1C+5H5EqTPsz=rM-4=cUrOLUw@mail.gmail.com> <20201130150642.GB6221@mail> <X8VjIoVizIIrqCeE@acme.spoerlein.net> <20201201095218.GC6221@mail>

On Tue, Dec 1, 2020 at 10:52 AM Davíð Steinn Geirsson <david@isnic.is> wrote:

> On Mon, Nov 30, 2020 at 10:24:50PM +0100, Ulrich Spörlein wrote:
> > On Mon, 2020-11-30 at 15:06:42 +0000, Davíð Steinn Geirsson wrote:
> > > On Fri, Nov 13, 2020 at 05:33:12PM +0800, Li-Wen Hsu wrote:
> > > > On Fri, Nov 13, 2020 at 2:28 AM Yasuhiro KIMURA <yasu@utahime.org>
> wrote:
> > > > >
> > > > > From: Davíð Steinn Geirsson <david@isnic.is>
> > > > > Subject: 504 errors from cgit-beta
> > > > > Date: Thu, 12 Nov 2020 15:56:59 +0000
> > > > >
> > > > > > We are getting frequent 504 errors when running `git fetch` against an
> > > > > > existing checkout of `ports.git` from https://cgit-beta.freebsd.org/ports.git:
> > > > > >
> > > > > > ```
> > > > > > $ git fetch cgit-beta
> > > > > > error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504
> > > > > > fatal: the remote end hung up unexpectedly
> > > > > > ```
> > > > >
> > > > > I experienced the same error when accessing the Emacs git repository over
> > > > > HTTPS. The following is the bug report I submitted about the issue.
> > > > >
> > > > > https://savannah.nongnu.org/support/?110322
> > > > >
> > > > > As you can see, the site administrator fixed the issue by increasing
> > > > > the `fastcgi_read_timeout` and `proxy_read_timeout` parameters of
> > > > > nginx. Since cgit-beta also uses nginx, this may fix your error as
> > > > > well. In my case, however, access always failed and never succeeded,
> > > > > so the cause in your case may be different from mine.
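> > > > >
> > > > > For reference, a minimal sketch of the two directives involved (the
> > > > > 300-second values below are only illustrative, not the values the
> > > > > administrator actually used):
> > > > >
> > > > > ```
> > > > > # nginx: in the server/location block that handles git-upload-pack.
> > > > > # Which directive applies depends on whether requests go through
> > > > > # FastCGI or a proxied backend.
> > > > > fastcgi_read_timeout 300s;
> > > > > proxy_read_timeout   300s;
> > > > > ```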
> > > >
> > > > Thanks, I have checked this; indeed, some requests' handlers didn't have
> > > > a long enough timeout setting, and I've relaxed them. Hope this solves
> > > > some people's issues. Please check it again, and if it still fails for
> > > > you, we might need more information to debug.
> > >
> > > This problem disappeared after your changes, but as of this weekend it
> > > seems to be happening again:
> > >
> > > user@ssh:~/foo/ports$ git fetch -v cgit-beta
> > > POST git-upload-pack (gzip 3272 to 1703 bytes)
> > > POST git-upload-pack (gzip 2577 to 1354 bytes)
> > > error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out
> > > fatal: the remote end hung up unexpectedly
> > >
> > > Is it possible some web server config got overwritten during the last
> > > batch of changes?
> >
> > This is most definitely fallout from the commit hashes changing. That means
> > your client will upload basically all the hashes or packs for the server to
> > compare what it does and does not have.
> >
> > What is your up/downstream bandwidth situation like? Could you try some more
> > tracing as outlined here:
> > https://stackoverflow.com/questions/27442134/git-fetch-hangs-on-git-upload-pack
> > What sort of custom work do you have in there (branches, etc.)? I'm curious
> > to find out a way to reset this non-destructively ... and I have an idea.
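> >
> > A sketch of the kind of tracing meant there (standard git trace
> > environment variables; `cgit-beta` is the remote name used earlier in
> > this thread):
> >
> > ```
> > # Log the HTTP requests and the pack negotiation, to see which request
> > # is the one that comes back as a 504.
> > GIT_TRACE=1 GIT_CURL_VERBOSE=1 GIT_TRACE_PACKET=1 git fetch -v cgit-beta
> > ```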
>
> Up/downstream should be good. Speedtests show ~100-160Mbit/s in both
> directions. Cloning a repo from cgit-beta.freebsd.org I see 7.75 MiB/s.
>
> The checkout I was working from had two branches: `upstream`, which is
> a 1:1 clone of the state of the `main` branch on cgit-beta, and `main`,
> which is the same but also has a couple of local ports in commits that
> get rebased on top of `upstream` whenever it is updated. When this error
> occurred I was on the `upstream` branch.
>
> This was a manual test, but normally the same update-then-rebase process
> happens as part of a CI job which was also failing.
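>
> A rough sketch of that update-then-rebase step (branch and remote names
> as described above; the actual CI job may differ in detail):
>
> ```
> # Refresh the mirror branch from cgit-beta, then replay the local ports
> # commits on top of it.
> git fetch cgit-beta
> git checkout upstream
> git reset --hard cgit-beta/main
> git rebase upstream main   # rebases `main` onto the refreshed `upstream`
> ```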
>
> It seems this is fixed now, as the last 4 runs of the CI job were successful
> (the first successful run was at 18:42 UTC). If I see a similar error again,
> I'll follow the linked steps and send a more detailed trace.
>
>
I saw that we had tons of loose objects in src and doc (but not in ports)
and I gc'ed them today around UTC 9:00 or so. Maybe ports did auto-gc
yesterday?
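
For anyone curious, a sketch of that check and cleanup with plain git commands
(run in each server-side repository; nothing FreeBSD-specific here):

```
# Count loose objects and existing packs; a large "count" means gc is overdue.
git count-objects -v

# Repack loose objects and expire stale unreachable ones (default settings).
git gc
```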

I have some fixes to how we push things into the repo that might (or might not)
reduce the number of loose objects we end up with. I'm puzzled that doc of all
places would result in loose objects. For src this is expected due to the
elaborate re-writes I'm doing post-conversion.

Hmmm


