Date:        Wed, 28 Nov 2018 16:43:28 +0200
From:        Konstantin Belousov <kostikbel@gmail.com>
To:          Willem Jan Withagen <wjw@digiware.nl>
Cc:          cem@freebsd.org, "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>
Subject:     Re: setting distinct core file names
Message-ID:  <20181128144328.GF2378@kib.kiev.ua>
In-Reply-To: <ba6d919c-4a36-c1ca-8e93-c239269a8cbc@digiware.nl>
References:  <84f498ff-3d65-cd4e-1ff5-74c2e8f41f2e@digiware.nl> <CAG6CVpVXsbPCTAxu9j7t8_i17uP_55W9a_NuLzyNCGS=qo5C7A@mail.gmail.com> <7b2b134c-3fd3-6212-b06a-81003361e083@digiware.nl> <ba6d919c-4a36-c1ca-8e93-c239269a8cbc@digiware.nl>
On Wed, Nov 28, 2018 at 12:21:33PM +0100, Willem Jan Withagen wrote:
> On 28-11-2018 11:43, Willem Jan Withagen wrote:
> > On 27-11-2018 21:46, Conrad Meyer wrote:
> >> One (ugly) trick is to use multiple filesystem links to the script
> >> interpreter, where the link names distinguish the scripts. E.g.,
> >>
> >> $ ln /bin/sh /libexec/my_script_one_sh
> >> $ ln /bin/sh /libexec/my_script_two_sh
> >> $ cat myscript1.sh
> >> #!/libexec/my_script_one_sh
> >> ...
> >>
> >> Cores will be dumped with %N of "my_script_one_sh."
> >
> > Neat trick... got to try and remember this.
> > But it is not the shell scripts that are crashing...
> >
> > When running Ceph tests during Jenkins builds, some
> > programs/executables intentionally crash, leaving cores.
> > Others (scripts) use some of these programs with correct input and
> > should NOT crash; they check during startup and termination that
> > there are no cores left.
> >
> > One Jenkins test run takes about 4 hours when not executed in
> > parallel. I'm testing 4 versions multiple times a day, so as not to
> > end up with a huge list of PRs to go through when testing fails.
> >
> > But the intentional cores and the failure cores collide here.
> > When I have a core program_x.core I can't tell whether it came from
> > a failure or from an intentional crash.
> >
> > Now if I could tell, per program, how to name its core, that would
> > let me fix the problem without overturning the complete Ceph testing
> > infrastructure, and still keep parallel tests.
> >
> > It would also help in that "regular" cores keep behaving the way
> > they do now, so other applications still have the same behaviour
> > and are still picked up by periodic processing.
>
> So I read a bit more about procctl and prctl (the Linux variant), and
> it turns out that Linux can set PR_SET_DUMPABLE. And that is actually
> used in some of the Ceph applications...
>
> Being able to set this to 0 or 1 would perhaps be a nice start as well.

Isn't setrlimit(RLIMIT_CORE, 0) enough?
It is slightly different syntax, but the idea is that you set
RLIMIT_CORE to zero, and then we do not even start dumping.
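[For reference, a minimal sketch of the setrlimit(RLIMIT_CORE, 0) suggestion
in C; this is not taken from the thread or from Ceph's code, and the helper
name disable_core_dumps() is purely illustrative. A process that must never
leave a core file lowers its RLIMIT_CORE limit to zero early in main(),
before running anything that might crash.]

#include <sys/resource.h>

#include <stdio.h>
#include <stdlib.h>

/*
 * Drop both the soft and the hard RLIMIT_CORE limit to 0, so neither this
 * process nor its children can write a core file.  Leaving rlim_max at its
 * old value instead would allow a child to raise the soft limit again and
 * dump cores as usual.
 */
static void
disable_core_dumps(void)
{
	struct rlimit rl = { .rlim_cur = 0, .rlim_max = 0 };

	if (setrlimit(RLIMIT_CORE, &rl) != 0) {
		perror("setrlimit(RLIMIT_CORE)");
		exit(1);
	}
}

int
main(void)
{
	disable_core_dumps();
	/* ... run the work that may fail but must not leave a core ... */
	return (0);
}

[From a shell wrapper the equivalent is "ulimit -c 0" (and "ulimit -H -c 0"
to also lock the hard limit), so the test scripts that must stay core-free
could lower the limit only around the invocations that are not supposed to
crash, while the intentionally crashing programs keep their normal limits
and continue to dump cores.]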