From owner-svn-src-all@freebsd.org  Wed Jan 10 21:18:47 2018
Return-Path: <owner-svn-src-all@freebsd.org>
Delivered-To: svn-src-all@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
	by mailman.ysv.freebsd.org (Postfix) with ESMTP id AC24DE7785C;
	Wed, 10 Jan 2018 21:18:47 +0000 (UTC)
	(envelope-from jhb@FreeBSD.org)
Received: from mxrelay.nyi.freebsd.org (mxrelay.nyi.freebsd.org [IPv6:2610:1c1:1:606c::19:3])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client CN "mxrelay.nyi.freebsd.org", Issuer "Let's Encrypt Authority X3" (verified OK))
	by mx1.freebsd.org (Postfix) with ESMTPS id 873B96D262;
	Wed, 10 Jan 2018 21:18:47 +0000 (UTC)
	(envelope-from jhb@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org [IPv6:2610:1c1:1:6068::e6a:0])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate)
	by mxrelay.nyi.freebsd.org (Postfix) with ESMTPS id AEF173045;
	Wed, 10 Jan 2018 21:18:46 +0000 (UTC)
	(envelope-from jhb@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
	by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id w0ALIkUU021661;
	Wed, 10 Jan 2018 21:18:46 GMT
	(envelope-from jhb@FreeBSD.org)
Received: (from jhb@localhost)
	by repo.freebsd.org (8.15.2/8.15.2/Submit) id w0ALIkMi021660;
	Wed, 10 Jan 2018 21:18:46 GMT
	(envelope-from jhb@FreeBSD.org)
Message-Id: <201801102118.w0ALIkMi021660@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: jhb set sender to jhb@FreeBSD.org using -f
From: John Baldwin <jhb@FreeBSD.org>
Date: Wed, 10 Jan 2018 21:18:46 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r327792 - head/sys/kern
X-SVN-Group: head
X-SVN-Commit-Author: jhb
X-SVN-Commit-Paths: head/sys/kern
X-SVN-Commit-Revision: 327792
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-all@freebsd.org
X-Mailman-Version: 2.1.25
Precedence: list
List-Id: "SVN commit messages for the entire src tree \(except for \"user\" and \"projects\"\)" <svn-src-all.freebsd.org>
X-List-Received-Date: Wed, 10 Jan 2018 21:18:47 -0000

Author: jhb
Date: Wed Jan 10 21:18:46 2018
New Revision: 327792

URL: https://svnweb.freebsd.org/changeset/base/327792

Log:
  Don't store shadow copies of per-process AIO limits.
  
  Previously the AIO subsystem would save a snapshot of the currently
  configured per-process limits the first time a process used AIO.  The
  process would then keep using the snapshotted limits, ignoring any
  changes to the global limits, for the rest of its lifetime.  This change
  removes the snapshotted values and changes the AIO code to always check
  the global values, which can be toggled at runtime.  This means an
  administrator can now change the effective limits of existing processes.
  This is more consistent with how other limits configured via sysctl work
  in FreeBSD.
  
  Reviewed by:	asomers, kib
  MFC after:	2 weeks
  Sponsored by:	Chelsio Communications
  Differential Revision:	https://reviews.freebsd.org/D13819
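The shape of the change is worth spelling out before the diff: previously,
aio_init_aioinfo() copied the vfs.aio.* limits into the process's struct
kaioinfo, and every later admission check consulted that private copy; now
each check reads the global variable directly, so a sysctl write is visible
immediately.  Below is a minimal userspace sketch of the difference, not
kernel code: the names proc_state, can_queue_old, and can_queue_new are
invented for illustration, with one global standing in for a limit such as
vfs.aio.max_aio_queue_per_proc.

    #include <stdbool.h>
    #include <stdio.h>

    /* Global limit, standing in for a CTLFLAG_RW sysctl such as
     * vfs.aio.max_aio_queue_per_proc. */
    static int max_queue_per_proc = 256;

    struct proc_state {
    	int queued;		/* jobs currently queued by this process */
    	int limit_snapshot;	/* old scheme: copy taken at first AIO use */
    };

    /* Old scheme: the per-process snapshot is consulted, so changes to
     * the global after initialization are never observed. */
    static bool
    can_queue_old(const struct proc_state *ps)
    {
    	return (ps->queued < ps->limit_snapshot);
    }

    /* New scheme (what this revision does): the global is consulted on
     * every check, so a runtime sysctl change applies to existing
     * processes as well. */
    static bool
    can_queue_new(const struct proc_state *ps)
    {
    	return (ps->queued < max_queue_per_proc);
    }

    int
    main(void)
    {
    	struct proc_state ps = { .queued = 200,
    	    .limit_snapshot = max_queue_per_proc };

    	max_queue_per_proc = 128;	/* administrator lowers the limit */

    	printf("old check admits: %d\n", can_queue_old(&ps));	/* 1: stale */
    	printf("new check admits: %d\n", can_queue_new(&ps));	/* 0: current */
    	return (0);
    }

Compiled and run, the old check still admits the job against the stale
snapshot of 256, while the new check enforces the updated limit of 128.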
Modified:
  head/sys/kern/vfs_aio.c

Modified: head/sys/kern/vfs_aio.c
==============================================================================
--- head/sys/kern/vfs_aio.c	Wed Jan 10 21:17:33 2018	(r327791)
+++ head/sys/kern/vfs_aio.c	Wed Jan 10 21:18:46 2018	(r327792)
@@ -166,16 +166,16 @@ SYSCTL_INT(_vfs_aio, OID_AUTO, aiod_lifetime, CTLFLAG_
 static int max_aio_per_proc = MAX_AIO_PER_PROC;
 SYSCTL_INT(_vfs_aio, OID_AUTO, max_aio_per_proc, CTLFLAG_RW,
     &max_aio_per_proc, 0,
-    "Maximum active aio requests per process (stored in the process)");
+    "Maximum active aio requests per process");
 
 static int max_aio_queue_per_proc = MAX_AIO_QUEUE_PER_PROC;
 SYSCTL_INT(_vfs_aio, OID_AUTO, max_aio_queue_per_proc, CTLFLAG_RW,
     &max_aio_queue_per_proc, 0,
-    "Maximum queued aio requests per process (stored in the process)");
+    "Maximum queued aio requests per process");
 
 static int max_buf_aio = MAX_BUF_AIO;
 SYSCTL_INT(_vfs_aio, OID_AUTO, max_buf_aio, CTLFLAG_RW, &max_buf_aio, 0,
-    "Maximum buf aio requests per process (stored in the process)");
+    "Maximum buf aio requests per process");
 
 /*
  * Though redundant with vfs.aio.max_aio_queue_per_proc, POSIX requires
@@ -268,11 +268,8 @@ struct aioliojob {
 struct kaioinfo {
 	struct mtx	kaio_mtx;	/* the lock to protect this struct */
 	int	kaio_flags;	/* (a) per process kaio flags */
-	int	kaio_maxactive_count;	/* (*) maximum number of AIOs */
 	int	kaio_active_count;	/* (c) number of currently used AIOs */
-	int	kaio_qallowed_count;	/* (*) maxiumu size of AIO queue */
 	int	kaio_count;	/* (a) size of AIO queue */
-	int	kaio_ballowed_count;	/* (*) maximum number of buffers */
 	int	kaio_buffer_count;	/* (a) number of physio buffers */
 	TAILQ_HEAD(,kaiocb) kaio_all;	/* (a) all AIOs in a process */
 	TAILQ_HEAD(,kaiocb) kaio_done;	/* (a) done queue for process */
@@ -445,11 +442,8 @@ aio_init_aioinfo(struct proc *p)
 	ki = uma_zalloc(kaio_zone, M_WAITOK);
 	mtx_init(&ki->kaio_mtx, "aiomtx", NULL, MTX_DEF | MTX_NEW);
 	ki->kaio_flags = 0;
-	ki->kaio_maxactive_count = max_aio_per_proc;
 	ki->kaio_active_count = 0;
-	ki->kaio_qallowed_count = max_aio_queue_per_proc;
 	ki->kaio_count = 0;
-	ki->kaio_ballowed_count = max_buf_aio;
 	ki->kaio_buffer_count = 0;
 	TAILQ_INIT(&ki->kaio_all);
 	TAILQ_INIT(&ki->kaio_done);
@@ -708,7 +702,7 @@ restart:
 		userp = job->userproc;
 		ki = userp->p_aioinfo;
 
-		if (ki->kaio_active_count < ki->kaio_maxactive_count) {
+		if (ki->kaio_active_count < max_aio_per_proc) {
 			TAILQ_REMOVE(&aio_jobs, job, list);
 			if (!aio_clear_cancel_function(job))
 				goto restart;
@@ -1270,7 +1264,7 @@ aio_qphysio(struct proc *p, struct kaiocb *job)
 		error = -1;
 		goto unref;
 	}
-	if (ki->kaio_buffer_count >= ki->kaio_ballowed_count) {
+	if (ki->kaio_buffer_count >= max_buf_aio) {
 		error = EAGAIN;
 		goto unref;
 	}
@@ -1479,7 +1473,7 @@ aio_aqueue(struct thread *td, struct aiocb *ujob, stru
 	ops->store_kernelinfo(ujob, -1);
 
 	if (num_queue_count >= max_queue_count ||
-	    ki->kaio_count >= ki->kaio_qallowed_count) {
+	    ki->kaio_count >= max_aio_queue_per_proc) {
 		ops->store_error(ujob, EAGAIN);
 		return (EAGAIN);
 	}
@@ -1774,8 +1768,7 @@ aio_kick_nowait(struct proc *userp)
 		aiop->aioprocflags &= ~AIOP_FREE;
 		wakeup(aiop->aioproc);
 	} else if (num_aio_resv_start + num_aio_procs < max_aio_procs &&
-	    ki->kaio_active_count + num_aio_resv_start <
-	    ki->kaio_maxactive_count) {
+	    ki->kaio_active_count + num_aio_resv_start < max_aio_per_proc) {
 		taskqueue_enqueue(taskqueue_aiod_kick, &ki->kaio_task);
 	}
 }
@@ -1794,8 +1787,7 @@ retryproc:
 		aiop->aioprocflags &= ~AIOP_FREE;
 		wakeup(aiop->aioproc);
 	} else if (num_aio_resv_start + num_aio_procs < max_aio_procs &&
-	    ki->kaio_active_count + num_aio_resv_start <
-	    ki->kaio_maxactive_count) {
+	    ki->kaio_active_count + num_aio_resv_start < max_aio_per_proc) {
 		num_aio_resv_start++;
 		mtx_unlock(&aio_job_mtx);
 		error = aio_newproc(&num_aio_resv_start);
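A practical consequence of the hunks above: since these knobs are
CTLFLAG_RW, a runtime change such as sysctl vfs.aio.max_aio_per_proc=16
(the value here is only an example) now throttles processes already using
AIO, rather than only processes that initialize AIO afterwards.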