Date:      Sun, 3 Mar 2024 16:29:50 -0800
From:      Rick Macklem <rick.macklem@gmail.com>
To:        Garrett Wollman <wollman@bimajority.org>
Cc:        stable@freebsd.org
Subject:   Re: 13-stable NFS server hang
Message-ID:  <CAM5tNy4L90W-6UhruxyUtsjt4k0fZnQyFFrW9rgWdC3GRdXy2g@mail.gmail.com>
In-Reply-To: <CAM5tNy7Ezs2XnAiWAf2wBrROhweyfWoeq2k+0Qqk4KCtw25ReA@mail.gmail.com>
References:  <26078.50375.679881.64018@hergotha.csail.mit.edu> <CAM5tNy7ZZ2bVLmYnOCWzrS9wq6yudoV5JKG5ObRU0=wLt1ofZw@mail.gmail.com> <26083.64612.717082.366639@hergotha.csail.mit.edu> <CAM5tNy4BM3fwccjF53ROP-7NojsWMM2fUY2_RA-4GMWfc6Sn4g@mail.gmail.com> <CAM5tNy47qzCSCxUik3LyV=VtpYGgaLoWehP4AeJCXz0ik0JGaw@mail.gmail.com> <CAM5tNy7Ezs2XnAiWAf2wBrROhweyfWoeq2k+0Qqk4KCtw25ReA@mail.gmail.com>

On Sun, Mar 3, 2024 at 4:28 PM Rick Macklem <rick.macklem@gmail.com> wrote:
>
> On Sun, Mar 3, 2024 at 3:27 PM Rick Macklem <rick.macklem@gmail.com> wrote:
> >
> > On Sun, Mar 3, 2024 at 1:17 PM Rick Macklem <rick.macklem@gmail.com> wrote:
> > >
> > > On Sat, Mar 2, 2024 at 8:28 PM Garrett Wollman <wollman@bimajority.org> wrote:
> > > >
> > > >
> > > > I wrote previously:
> > > > > PID    TID COMM                TDNAME              KSTACK
> > > > > 997 108481 nfsd                nfsd: master        mi_switch sleepq_timedwait _sleep nfsv4_lock nfsrvd_dorpc nfssvc_program svc_run_internal svc_run nfsrvd_nfsd nfssvc_nfsd sys_nfssvc amd64_syscall fast_syscall_common
> > > > > 997 960918 nfsd                nfsd: service       mi_switch sleepq_timedwait _sleep nfsv4_lock nfsrv_setclient nfsrvd_exchangeid nfsrvd_dorpc nfssvc_program svc_run_internal svc_thread_start fork_exit fork_trampoline
> > > > > 997 962232 nfsd                nfsd: service       mi_switch _cv_wait txg_wait_synced_impl txg_wait_synced dmu_offset_next zfs_holey zfs_freebsd_ioctl vn_generic_copy_file_range vop_stdcopy_file_range VOP_COPY_FILE_RANGE vn_copy_file_range nfsrvd_copy_file_range nfsrvd_dorpc nfssvc_program svc_run_internal svc_thread_start fork_exit fork_trampoline
> > > >
> > > > I spent some time this evening looking at this last stack trace, and
> > > > stumbled across the following comment in
> > > > sys/contrib/openzfs/module/zfs/dmu.c:
> > > >
> > > > | /*
> > > > |  * Enable/disable forcing txg sync when dirty checking for holes with lseek().
> > > > |  * By default this is enabled to ensure accurate hole reporting, it can result
> > > > |  * in a significant performance penalty for lseek(SEEK_HOLE) heavy workloads.
> > > > |  * Disabling this option will result in holes never being reported in dirty
> > > > |  * files which is always safe.
> > > > |  */
> > > > | int zfs_dmu_offset_next_sync = 1;
> > > >
> > > > I believe this explains why vn_copy_file_range sometimes takes much
> > > > longer than a second: our servers often have lots of data waiting to
> > > > be written to disk, and if the file being copied was recently modified
> > > > (and so is dirty), this might take several seconds.  I've set
> > > > vfs.zfs.dmu_offset_next_sync=0 on the server that was hurting the most
> > > > and am watching to see if we have more freezes.
> > > >
> > > > If this does the trick, then I can delay deploying a new kernel until
> > > > April, after my upcoming vacation.
> > > Interesting. Please let us know how it goes.
> > Btw, I just tried this for my trivial test and it worked very well.
> > A 1Gbyte file was copied in two Copy RPCs, one taking about 1sec and
> > the other slightly less than 1sec.
> Oops, I spoke too soon.
> The Copy RPCs worked fine (as above) but the Commit RPCs took
> a long time, so it still looks like you may need the patches.
And I should mention that my test was done on a laptop without a ZIL,
so a ZIL on a separate device might generate different results.
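
For anyone who wants to poke at this outside of NFS: the
dmu_offset_next() wait in the trace above is the same path that
lseek(2) with SEEK_HOLE/SEEK_DATA exercises. A minimal sketch of a
hole-scanning loop (untested here; the file argument is just a
placeholder for whatever dirty file you test against):

#include <sys/types.h>
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
        off_t data, hole = 0;
        int fd;

        if (argc != 2)
                errx(1, "usage: %s <file>", argv[0]);
        if ((fd = open(argv[1], O_RDONLY)) < 0)
                err(1, "open %s", argv[1]);
        /*
         * Walk alternating data/hole regions.  With the default
         * vfs.zfs.dmu_offset_next_sync=1, each probe of a dirty
         * file can force a txg sync, which is the stall seen above.
         */
        while ((data = lseek(fd, hole, SEEK_DATA)) >= 0) {
                if ((hole = lseek(fd, data, SEEK_HOLE)) < 0)
                        break;
                printf("data %jd..%jd\n", (intmax_t)data, (intmax_t)hole);
        }
        close(fd);
        return (0);
}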

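Also, for the record, Garrett's workaround can be applied at runtime
with sysctl(8) ("sysctl vfs.zfs.dmu_offset_next_sync=0"), or from a
program. A sketch via sysctlbyname(3) (needs root to write):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
        int oldval, newval = 0; /* 0 = never force the txg sync */
        size_t oldlen = sizeof(oldval);

        /* Read the current value and write the new one. */
        if (sysctlbyname("vfs.zfs.dmu_offset_next_sync", &oldval, &oldlen,
            &newval, sizeof(newval)) != 0)
                err(1, "sysctlbyname");
        printf("dmu_offset_next_sync: %d -> %d\n", oldval, newval);
        return (0);
}
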
rick
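
ps: For anyone comparing results, a test that generates Copy RPCs
against an NFSv4.2 mount can be as simple as a copy_file_range(2)
loop; a sketch (the paths are placeholders, not my actual test
program):

#include <sys/types.h>
#include <err.h>
#include <fcntl.h>
#include <limits.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
        ssize_t ret;
        int infd, outfd;

        if (argc != 3)
                errx(1, "usage: %s <infile> <outfile>", argv[0]);
        if ((infd = open(argv[1], O_RDONLY)) < 0)
                err(1, "open %s", argv[1]);
        if ((outfd = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644)) < 0)
                err(1, "open %s", argv[2]);
        /*
         * With NULL offset pointers the file offsets are used and
         * updated; over NFSv4.2 each call becomes one or more Copy
         * RPCs on the server.
         */
        do {
                ret = copy_file_range(infd, NULL, outfd, NULL,
                    SSIZE_MAX, 0);
        } while (ret > 0);
        if (ret < 0)
                err(1, "copy_file_range");
        return (0);
}
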
>
> rick
>
> >
> > So, your vacation may be looking better, rick
> >
> > >
> > > And enjoy your vacation, rick
> > >
> > > >
> > > > -GAWollman
> > > >


