From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 275436] tmpfs does not honor memory limits on writes
Date: Sun, 03 Dec 2023 08:04:51 +0000
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=275436

--- Comment #6 from Konstantin Belousov ---
The VM (almost) always ensures that there are several free pages.
Moreover, it typically even manages to free several pages in reasonable
time. This is why our OOM handling is organized the way it is:
- the global OOM triggers when the VM cannot get a free page, despite the
  existence of a page shortage in all domains, for some time. It is
  typically triggered when the kernel allocates too many unmanaged pages
  (not the tmpfs case).
- the per-process OOM triggers when the page fault handler needs a page
  and cannot allocate it after several cycles of allocation attempts.

I added the second (per-process) OOM since the global OOM (similar to
your patch) was not able to handle the typical situation of usermode
sitting on too many dirty pages.

Now that I have formulated this, I think that a reasonable approach for
tmpfs would be something along the lines of the per-process OOM: try the
allocation, and return ENOSPC if it failed, with some criteria for
restart. You might look at vm_fault_allocate_oom() in vm/vm_fault.c.
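
A minimal sketch of what that could look like, as self-contained userland
C rather than kernel code. The helper names (sim_page_alloc(),
sim_page_wait(), tmpfs_alloc_page_retry()) and the fixed retry bound are
illustrative stand-ins, not actual FreeBSD kernel interfaces; real
restart criteria would resemble those in vm_fault_allocate_oom().

    /*
     * Sketch: attempt the page allocation a bounded number of times,
     * pausing between attempts so reclamation can make progress, and
     * give up with ENOSPC instead of waiting indefinitely.
     */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TMPFS_ALLOC_RETRIES 3       /* illustrative restart criterion */

    static int sim_free_pages = 2;      /* simulated free-page pool */

    /* Stand-in for vm_page_alloc(): NULL when no free page exists. */
    static void *
    sim_page_alloc(void)
    {
            if (sim_free_pages == 0)
                    return (NULL);
            sim_free_pages--;
            return (malloc(4096));
    }

    /* Stand-in for a vm_wait()-style pause letting the pagedaemon run. */
    static void
    sim_page_wait(void)
    {
    }

    static int
    tmpfs_alloc_page_retry(void **pagep)
    {
            for (int tries = 0; tries < TMPFS_ALLOC_RETRIES; tries++) {
                    void *p = sim_page_alloc();
                    if (p != NULL) {
                            *pagep = p;
                            return (0);
                    }
                    /* Allocation failed; wait once, then retry. */
                    sim_page_wait();
            }
            /* Restart criteria exhausted: report no space, do not kill. */
            return (ENOSPC);
    }

    int
    main(void)
    {
            void *page;

            for (int i = 0; i < 4; i++) {
                    int error = tmpfs_alloc_page_retry(&page);
                    printf("write %d: %s\n", i,
                        error == 0 ? "page allocated" : "ENOSPC");
                    if (error == 0)
                            free(page);
            }
            return (0);
    }

The point is the error return: when the retry budget is exhausted, the
tmpfs write path would report ENOSPC to the caller instead of letting
the fault-time OOM machinery pick a process to kill.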