From: Alan Somers
Date: Sun, 28 Mar 2021 23:20:39 -0600
Subject: Re: 13.0 RC4 might be delayed
To: Gleb Popov
Cc: David G Lawrence, FreeBSD Current <freebsd-current@freebsd.org>
On Sun, Mar 28, 2021 at 10:36 PM Gleb Popov wrote:

> On Mon, Mar 29, 2021 at 4:37 AM David G Lawrence via freebsd-current <
> freebsd-current@freebsd.org> wrote:
>
> > > On 27/03/21 06:04, David G Lawrence via freebsd-current wrote:
> > > >>> On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin <grahamperrin@gmail.com>
> > > >>> wrote:
> > > >>>
> > > >>>> On 26/03/2021 03:40, The Doctor via freebsd-current wrote:
> > > >>>>> … if people are having issues with ports like …
> > > >>>>
> > > >>>> If I'm not mistaken:
> > > >>>>
> > > >>>> * 13.0-RC3 seems to be troublesome, as a guest machine, with
> > > >>>>   emulators/virtualbox-ose 6.1.18 as the host
> > > >>>>
> > > >>>> * no such trouble with 12.0-RELEASE-p5 as a guest.
> > > >>>>
> > > >>>> I hope to refine the bug report this weekend.
> > > >>>
> > > >>> Had nothing but frequent guest lockups on 6.1.18 with my Win7 system.
> > > >>> That was right after 6.1.18 was put into ports. Fell back to legacy (v5)
> > > >>> and will try again shortly to see if it's any better.
> > > >>
> > > >> Kevin,
> > > >>
> > > >>    Make sure you have these options in your /etc/sysctl.conf :
> > > >>
> > > >> vfs.aio.max_buf_aio=8192
> > > >> vfs.aio.max_aio_queue_per_proc=65536
> > > >> vfs.aio.max_aio_per_proc=8192
> > > >> vfs.aio.max_aio_queue=65536
> > > >>
> > > >>    ...otherwise the guest I/O will randomly hang in VirtualBox. This
> > > >> issue was mitigated in a late 5.x VirtualBox by patching it not to use
> > > >> AIO, but the issue came back in 6.x when that patch wasn't carried
> > > >> forward.
> > > >
> > > > Sorry, I lost that patch. Can you point me to it? Maybe it can be
> > > > easily ported.
> > >
> > > I found the relevant commit. Please give me some time for testing and
> > > I'll put this patch back in the tree.
> >
> > If you're going to put that patch back in, then AIO should probably be
> > made an option in the port config, as shutting AIO off by default will
> > have a significant performance impact. Without AIO, all guest I/O will
> > become synchronous.
>
> Are you sure about that? Without AIO, VBox uses a generic POSIX backend,
> which is based on pthread, I think.

We should also consider changing the defaults.

vfs.aio.max_buf_aio: this is the maximum number of buffered AIO requests per
process. Buffered AIO requests are only used when directing AIO at device
nodes, not files, and only for devices that don't support unmapped I/O. Most
devices do support unmapped I/O, including all GEOM devices, and for those
the number of AIO requests per process is unlimited, so this knob isn't very
important. It matters more on powerpc and mips, where unmapped I/O isn't
always possible; 16 is probably pretty reasonable for mips.

vfs.aio.max_aio_queue_per_proc: this is the maximum number of queued AIO
requests per process. It applies to all AIO requests, whether to files or
devices, so it ought to be large. If your program is too unsophisticated to
handle EAGAIN, then it must be very large. Otherwise, a few multiples of
max(vfs.aio.max_aio_per_proc, your SSD's queue depth) is probably
sufficient.
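By "handle EAGAIN" I mean something like the sketch below: when aio_write()
fails because a queue limit was hit, reap a completed request and resubmit
instead of giving up. This is only an illustration of the idea, not
VirtualBox's actual code; the fd, buffer, and offset are placeholders, and
it uses FreeBSD's aio_waitcomplete() for the reaping.

#include <aio.h>
#include <errno.h>
#include <string.h>

/*
 * Illustrative sketch: queue one asynchronous write, retrying whenever
 * aio_write() reports EAGAIN because a vfs.aio.* queue limit has been
 * reached, instead of treating that as a fatal error.  The caller
 * supplies the (hypothetical) fd, buffer, and offset.
 */
static int
queue_write(struct aiocb *cb, int fd, void *buf, size_t len, off_t off)
{
    memset(cb, 0, sizeof(*cb));
    cb->aio_fildes = fd;
    cb->aio_buf = buf;
    cb->aio_nbytes = len;
    cb->aio_offset = off;

    while (aio_write(cb) == -1) {
        if (errno != EAGAIN)
            return (-1);            /* a real failure */
        /*
         * The per-process or global AIO queue is full.  Block until one
         * of our outstanding requests completes, which frees a slot,
         * then try to queue this request again.  Real code would also
         * check the completion status of the reaped request.
         */
        struct aiocb *done;
        (void)aio_waitcomplete(&done, NULL);
    }
    return (0);
}

A submitter written along those lines is happy with modest queue limits; one
that treats EAGAIN as fatal is the one that needs the limits cranked way up.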
vfs.aio.max_aio_per_proc: this is the maximum number of active AIO requests
in the slow path (for I/O to files, or other cases like misaligned I/O to
disks). Setting this too low won't cause programs to fail, but it could hurt
performance. Setting it higher than vfs.aio.max_aio_procs probably won't
have any benefit.

vfs.aio.max_aio_queue: like max_aio_per_proc, but global instead of
per-process. It doesn't need to be more than a few multiples of
max_aio_per_proc.

Finally, I see that emulators/virtualbox-ose's pkg-message advises checking
for the AIO kernel module. That advice is obsolete: AIO is nowadays built
into the kernel and always enabled, and there is no kernel module any
longer.

Actually, the defaults don't look unreasonable to me for an amd64 system
with disk-, file-, or zvol-backed VMs. Does VirtualBox properly handle
EAGAIN as returned by aio_write, aio_read, and lio_listio? If not, raising
these limits is a poor substitute for fixing VirtualBox. If it does, then
I'm really curious: if anybody can tell me which limit actually solves the
problem, I would like to know.

-Alan
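P.S. If anybody does start bisecting these knobs, their current values can
be read programmatically as well as with sysctl(8). A quick illustrative
sketch, nothing VirtualBox-specific, just sysctlbyname(3):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
    /* The four limits discussed above; all are plain integer sysctls. */
    const char *knobs[] = {
        "vfs.aio.max_buf_aio",
        "vfs.aio.max_aio_queue_per_proc",
        "vfs.aio.max_aio_per_proc",
        "vfs.aio.max_aio_queue",
    };

    for (size_t i = 0; i < sizeof(knobs) / sizeof(knobs[0]); i++) {
        int val;
        size_t len = sizeof(val);

        if (sysctlbyname(knobs[i], &val, &len, NULL, 0) == 0)
            printf("%s = %d\n", knobs[i], val);
        else
            perror(knobs[i]);
    }
    return (0);
}

That's the same information sysctl(8) gives you, of course, but it can be
handy if a guest-I/O test program should log the limits it ran under.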