Date:      Tue, 1 Mar 2016 18:12:14 +0000 (UTC)
From:      John Baldwin <jhb@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r296277 - in head: share/man/man4 sys/conf sys/kern sys/modules sys/modules/aio sys/sys tests/sys/aio
Message-ID:  <201603011812.u21ICEqt071147@repo.freebsd.org>

Author: jhb
Date: Tue Mar  1 18:12:14 2016
New Revision: 296277
URL: https://svnweb.freebsd.org/changeset/base/296277

Log:
  Refactor the AIO subsystem to permit file-type-specific handling and
  improve cancellation robustness.
  
  Introduce a new file operation, fo_aio_queue, which is responsible for
  queueing and completing an asynchronous I/O request for a given file.
  The AIO subsystem now exports a library of routines to manipulate AIO
  requests as well as the ability to run a handler function in the
  "default" pool of AIO daemons to service a request.
  
  A default implementation, used for file types that do not provide an
  fo_aio_queue method, queues requests to the "default" pool and invokes
  the fo_read or fo_write methods as before.
  
  The AIO subsystem allows file types to install a private "cancel"
  routine when a request is queued, permitting safe dequeueing and
  cleanup of cancelled requests.
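  
  A condensed sketch of that handshake, modeled on the socket code in
  this commit (my_jobq and my_lock are hypothetical per-file-type state,
  with my_lock assumed to be initialized via mtx_init() at attach time;
  the aio_* helpers are the new exported routines):
  
  	static TAILQ_HEAD(, kaiocb) my_jobq =
  	    TAILQ_HEAD_INITIALIZER(my_jobq);
  	static struct mtx my_lock;
  
  	static void
  	my_aio_cancel(struct kaiocb *job)
  	{
  
  		mtx_lock(&my_lock);
  		/* Skip the dequeue if a service thread already claimed it. */
  		if (!aio_cancel_cleared(job))
  			TAILQ_REMOVE(&my_jobq, job, list);
  		mtx_unlock(&my_lock);
  		aio_cancel(job);
  	}
  
  	static int
  	my_aio_queue(struct file *fp, struct kaiocb *job)
  	{
  
  		mtx_lock(&my_lock);
  		/* Install the cancel routine before the job is visible. */
  		if (!aio_set_cancel_function(job, my_aio_cancel))
  			panic("new job was cancelled");
  		TAILQ_INSERT_TAIL(&my_jobq, job, list);
  		mtx_unlock(&my_lock);
  		return (0);
  	}
  
  A service thread later dequeues the job and must see
  aio_clear_cancel_function() succeed before performing the I/O, as
  soaio_process_sb() does below.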
  
  Sockets now use their own pool of AIO daemons and service per-socket
  requests in FIFO order.  Socket requests will not block indefinitely,
  permitting timely cancellation of all requests.
  
  Due to the now-tight coupling of the AIO subsystem with file types,
  the AIO subsystem is now a standard part of all kernels.  The VFS_AIO
  kernel option and aio.ko module are gone.
  
  Many file types may block indefinitely in their fo_read or fo_write
  callbacks, resulting in a hung AIO daemon.  This can leave user
  processes hung (when a process attempts to cancel all outstanding
  requests during exit) or hang the entire system.  To protect against
  this, AIO requests are only permitted for known "safe" files by
  default.  AIO requests for all file types can be enabled by setting
  the new vfs.aio.enable_unsafe sysctl to a non-zero value.  The AIO
  tests have been updated to skip operations on unsafe file types if
  the sysctl is zero.
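  
  For example, to enable AIO for all file types on a running system:
  
  	# sysctl vfs.aio.enable_unsafe=1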
  
  Currently, AIO requests on sockets and raw disks are considered safe
  and are enabled by default.  aio_mlock() is also enabled by default.
  
  Reviewed by:	cem, jilles
  Discussed with:	kib (earlier version)
  Sponsored by:	Chelsio Communications
  Differential Revision:	https://reviews.freebsd.org/D5289

Added:
  head/tests/sys/aio/local.h   (contents, props changed)
Deleted:
  head/sys/modules/aio/Makefile
Modified:
  head/share/man/man4/aio.4
  head/sys/conf/NOTES
  head/sys/conf/files
  head/sys/conf/options
  head/sys/kern/sys_socket.c
  head/sys/kern/uipc_debug.c
  head/sys/kern/uipc_sockbuf.c
  head/sys/kern/uipc_socket.c
  head/sys/kern/vfs_aio.c
  head/sys/modules/Makefile
  head/sys/sys/aio.h
  head/sys/sys/file.h
  head/sys/sys/sockbuf.h
  head/sys/sys/socketvar.h
  head/tests/sys/aio/aio_kqueue_test.c
  head/tests/sys/aio/aio_test.c
  head/tests/sys/aio/lio_kqueue_test.c

Modified: head/share/man/man4/aio.4
==============================================================================
--- head/share/man/man4/aio.4	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/share/man/man4/aio.4	Tue Mar  1 18:12:14 2016	(r296277)
@@ -27,24 +27,116 @@
 .\"
 .\" $FreeBSD$
 .\"
-.Dd October 24, 2002
+.Dd March 1, 2016
 .Dt AIO 4
 .Os
 .Sh NAME
 .Nm aio
 .Nd asynchronous I/O
-.Sh SYNOPSIS
-To link into the kernel:
-.Cd "options VFS_AIO"
-.Pp
-To load as a kernel loadable module:
-.Dl kldload aio
 .Sh DESCRIPTION
 The
 .Nm
 facility provides system calls for asynchronous I/O.
-It is available both as a kernel option for static inclusion and as a
-dynamic kernel module.
+However, asynchronous I/O operations are only enabled for certain file
+types by default.
+Asynchronous I/O operations for other file types may block an AIO daemon
+indefinitely, resulting in process and/or system hangs.
+Asynchronous I/O operations can be enabled for all file types by setting
+the
+.Va vfs.aio.enable_unsafe
+sysctl node to a non-zero value.
+.Pp
+Asynchronous I/O operations on sockets and raw disk devices do not block
+indefinitely and are enabled by default.
+.Pp
+The
+.Nm
+facility uses kernel processes
+(also known as AIO daemons)
+to service most asynchronous I/O requests.
+These processes are grouped into pools containing a variable number of
+processes.
+Each pool adds or removes processes based on load.
+Pools can be configured by sysctl nodes that define the minimum
+and maximum number of processes as well as the amount of time an idle
+process will wait before exiting.
+.Pp
+One pool of AIO daemons is used to service asynchronous I/O requests for
+sockets.
+These processes are named
+.Dq soaiod<N> .
+The following sysctl nodes are used with this pool:
+.Bl -tag -width indent
+.It Va kern.ipc.aio.num_procs
+The current number of processes in the pool.
+.It Va kern.ipc.aio.target_procs
+The minimum number of processes that should be present in the pool.
+.It Va kern.ipc.aio.max_procs
+The maximum number of processes permitted in the pool.
+.It Va kern.ipc.aio.lifetime
+The amount of time, in clock ticks, that a process is permitted to idle.
+If a process is idle for this amount of time and there are more processes
+in the pool than the target minimum,
+the process will exit.
+.El
+.Pp
+A second pool of AIO daemons is used to service all other asynchronous I/O
+requests except for I/O requests to raw disks.
+These processes are named
+.Dq aiod<N> .
+The following sysctl nodes are used with this pool:
+.Bl -tag -width indent
+.It Va vfs.aio.num_aio_procs
+The current number of processes in the pool.
+.It Va vfs.aio.target_aio_procs
+The minimum number of processes that should be present in the pool.
+.It Va vfs.aio.max_aio_procs
+The maximum number of processes permitted in the pool.
+.It Va vfs.aio.aiod_lifetime
+The amount of time, in clock ticks, that a process is permitted to idle.
+If a process is idle for this amount of time and there are more processes
+in the pool than the target minimum,
+the process will exit.
+.El
+.Pp
+Asynchronous I/O requests for raw disks are queued directly to the disk
+device layer after temporarily wiring the user pages associated with the
+request.
+These requests are not serviced by any of the AIO daemon pools.
+.Pp
+Several limits on the number of asynchronous I/O requests are imposed both
+system-wide and per-process.
+These limits are configured via the following sysctls:
+.Bl -tag -width indent
+.It Va vfs.aio.max_buf_aio
+The maximum number of queued asynchronous I/O requests for raw disks permitted
+for a single process.
+Asynchronous I/O requests that have completed but whose status has not been
+retrieved via
+.Xr aio_return 2
+or
+.Xr aio_waitcomplete 2
+are not counted against this limit.
+.It Va vfs.aio.num_buf_aio
+The number of queued asynchronous I/O requests for raw disks system-wide.
+.It Va vfs.aio.max_aio_queue_per_proc
+The maximum number of asynchronous I/O requests for a single process
+serviced concurrently by the default AIO daemon pool.
+.It Va vfs.aio.max_aio_per_proc
+The maximum number of outstanding asynchronous I/O requests permitted for a
+single process.
+This includes requests that have not been serviced,
+requests currently being serviced,
+and requests that have completed but whose status has not been retrieved via
+.Xr aio_return 2
+or
+.Xr aio_waitcomplete 2 .
+.It Va vfs.aio.num_queue_count
+The number of outstanding asynchronous I/O requests system-wide.
+.It Va vfs.aio.max_aio_queue
+The maximum number of outstanding asynchronous I/O requests permitted
+system-wide.
+.El
 .Sh SEE ALSO
 .Xr aio_cancel 2 ,
 .Xr aio_error 2 ,
@@ -54,9 +146,7 @@ dynamic kernel module.
 .Xr aio_waitcomplete 2 ,
 .Xr aio_write 2 ,
 .Xr lio_listio 2 ,
-.Xr config 8 ,
-.Xr kldload 8 ,
-.Xr kldunload 8
+.Xr sysctl 8
 .Sh HISTORY
 The
 .Nm
@@ -66,3 +156,7 @@ The
 .Nm
 kernel module appeared in
 .Fx 5.0 .
+The
+.Nm
+facility was integrated into all kernels in
+.Fx 11.0 .

Modified: head/sys/conf/NOTES
==============================================================================
--- head/sys/conf/NOTES	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/sys/conf/NOTES	Tue Mar  1 18:12:14 2016	(r296277)
@@ -1130,11 +1130,6 @@ options 	EXT2FS
 #
 options 	REISERFS
 
-# Use real implementations of the aio_* system calls.  There are numerous
-# stability and security issues in the current aio code that make it
-# unsuitable for inclusion on machines with untrusted local users.
-options 	VFS_AIO
-
 # Cryptographically secure random number generator; /dev/random
 device		random
 

Modified: head/sys/conf/files
==============================================================================
--- head/sys/conf/files	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/sys/conf/files	Tue Mar  1 18:12:14 2016	(r296277)
@@ -3332,7 +3332,7 @@ kern/uipc_socket.c		standard
 kern/uipc_syscalls.c		standard
 kern/uipc_usrreq.c		standard
 kern/vfs_acl.c			standard
-kern/vfs_aio.c			optional vfs_aio
+kern/vfs_aio.c			standard
 kern/vfs_bio.c			standard
 kern/vfs_cache.c		standard
 kern/vfs_cluster.c		standard

Modified: head/sys/conf/options
==============================================================================
--- head/sys/conf/options	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/sys/conf/options	Tue Mar  1 18:12:14 2016	(r296277)
@@ -213,7 +213,6 @@ SYSVSHM		opt_sysvipc.h
 SW_WATCHDOG	opt_watchdog.h
 TURNSTILE_PROFILING
 UMTX_PROFILING
-VFS_AIO
 VERBOSE_SYSINIT
 WLCACHE		opt_wavelan.h
 WLDEBUG		opt_wavelan.h

Modified: head/sys/kern/sys_socket.c
==============================================================================
--- head/sys/kern/sys_socket.c	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/sys/kern/sys_socket.c	Tue Mar  1 18:12:14 2016	(r296277)
@@ -34,9 +34,12 @@ __FBSDID("$FreeBSD$");
 
 #include <sys/param.h>
 #include <sys/systm.h>
+#include <sys/aio.h>
 #include <sys/domain.h>
 #include <sys/file.h>
 #include <sys/filedesc.h>
+#include <sys/kernel.h>
+#include <sys/kthread.h>
 #include <sys/malloc.h>
 #include <sys/proc.h>
 #include <sys/protosw.h>
@@ -48,6 +51,9 @@ __FBSDID("$FreeBSD$");
 #include <sys/filio.h>			/* XXX */
 #include <sys/sockio.h>
 #include <sys/stat.h>
+#include <sys/sysctl.h>
+#include <sys/sysproto.h>
+#include <sys/taskqueue.h>
 #include <sys/uio.h>
 #include <sys/ucred.h>
 #include <sys/un.h>
@@ -64,6 +70,22 @@ __FBSDID("$FreeBSD$");
 
 #include <security/mac/mac_framework.h>
 
+#include <vm/vm.h>
+#include <vm/pmap.h>
+#include <vm/vm_extern.h>
+#include <vm/vm_map.h>
+
+static SYSCTL_NODE(_kern_ipc, OID_AUTO, aio, CTLFLAG_RD, NULL,
+    "socket AIO stats");
+
+static int empty_results;
+SYSCTL_INT(_kern_ipc_aio, OID_AUTO, empty_results, CTLFLAG_RD, &empty_results,
+    0, "socket operation returned EAGAIN");
+
+static int empty_retries;
+SYSCTL_INT(_kern_ipc_aio, OID_AUTO, empty_retries, CTLFLAG_RD, &empty_retries,
+    0, "socket operation retries");
+
 static fo_rdwr_t soo_read;
 static fo_rdwr_t soo_write;
 static fo_ioctl_t soo_ioctl;
@@ -72,6 +94,9 @@ extern fo_kqfilter_t soo_kqfilter;
 static fo_stat_t soo_stat;
 static fo_close_t soo_close;
 static fo_fill_kinfo_t soo_fill_kinfo;
+static fo_aio_queue_t soo_aio_queue;
+
+static void	soo_aio_cancel(struct kaiocb *job);
 
 struct fileops	socketops = {
 	.fo_read = soo_read,
@@ -86,6 +111,7 @@ struct fileops	socketops = {
 	.fo_chown = invfo_chown,
 	.fo_sendfile = invfo_sendfile,
 	.fo_fill_kinfo = soo_fill_kinfo,
+	.fo_aio_queue = soo_aio_queue,
 	.fo_flags = DFLAG_PASSABLE
 };
 
@@ -363,3 +389,374 @@ soo_fill_kinfo(struct file *fp, struct k
 	    sizeof(kif->kf_path));
 	return (0);	
 }
+
+static STAILQ_HEAD(, task) soaio_jobs;
+static struct mtx soaio_jobs_lock;
+static struct task soaio_kproc_task;
+static int soaio_starting, soaio_idle, soaio_queued;
+static struct unrhdr *soaio_kproc_unr;
+
+static int soaio_max_procs = MAX_AIO_PROCS;
+SYSCTL_INT(_kern_ipc_aio, OID_AUTO, max_procs, CTLFLAG_RW, &soaio_max_procs, 0,
+    "Maximum number of kernel processes to use for async socket IO");
+
+static int soaio_num_procs;
+SYSCTL_INT(_kern_ipc_aio, OID_AUTO, num_procs, CTLFLAG_RD, &soaio_num_procs, 0,
+    "Number of active kernel processes for async socket IO");
+
+static int soaio_target_procs = TARGET_AIO_PROCS;
+SYSCTL_INT(_kern_ipc_aio, OID_AUTO, target_procs, CTLFLAG_RD,
+    &soaio_target_procs, 0,
+    "Preferred number of ready kernel processes for async socket IO");
+
+static int soaio_lifetime;
+SYSCTL_INT(_kern_ipc_aio, OID_AUTO, lifetime, CTLFLAG_RW, &soaio_lifetime, 0,
+    "Maximum lifetime for idle aiod");
+
+static void
+soaio_kproc_loop(void *arg)
+{
+	struct proc *p;
+	struct vmspace *myvm;
+	struct task *task;
+	int error, id, pending;
+
+	id = (intptr_t)arg;
+
+	/*
+	 * Grab an extra reference on the daemon's vmspace so that it
+	 * doesn't get freed by jobs that switch to a different
+	 * vmspace.
+	 */
+	p = curproc;
+	myvm = vmspace_acquire_ref(p);
+
+	mtx_lock(&soaio_jobs_lock);
+	MPASS(soaio_starting > 0);
+	soaio_starting--;
+	for (;;) {
+		while (!STAILQ_EMPTY(&soaio_jobs)) {
+			task = STAILQ_FIRST(&soaio_jobs);
+			STAILQ_REMOVE_HEAD(&soaio_jobs, ta_link);
+			soaio_queued--;
+			pending = task->ta_pending;
+			task->ta_pending = 0;
+			mtx_unlock(&soaio_jobs_lock);
+
+			task->ta_func(task->ta_context, pending);
+
+			mtx_lock(&soaio_jobs_lock);
+		}
+		MPASS(soaio_queued == 0);
+
+		if (p->p_vmspace != myvm) {
+			mtx_unlock(&soaio_jobs_lock);
+			vmspace_switch_aio(myvm);
+			mtx_lock(&soaio_jobs_lock);
+			continue;
+		}
+
+		soaio_idle++;
+		error = mtx_sleep(&soaio_idle, &soaio_jobs_lock, 0, "-",
+		    soaio_lifetime);
+		soaio_idle--;
+		if (error == EWOULDBLOCK && STAILQ_EMPTY(&soaio_jobs) &&
+		    soaio_num_procs > soaio_target_procs)
+			break;
+	}
+	soaio_num_procs--;
+	mtx_unlock(&soaio_jobs_lock);
+	free_unr(soaio_kproc_unr, id);
+	kproc_exit(0);
+}
+
+static void
+soaio_kproc_create(void *context, int pending)
+{
+	struct proc *p;
+	int error, id;
+
+	mtx_lock(&soaio_jobs_lock);
+	for (;;) {
+		if (soaio_num_procs < soaio_target_procs) {
+			/* Must create */
+		} else if (soaio_num_procs >= soaio_max_procs) {
+			/*
+			 * Hit the limit on kernel processes, don't
+			 * create another one.
+			 */
+			break;
+		} else if (soaio_queued <= soaio_idle + soaio_starting) {
+			/*
+			 * No more AIO jobs waiting for a process to be
+			 * created, so stop.
+			 */
+			break;
+		}
+		soaio_starting++;
+		mtx_unlock(&soaio_jobs_lock);
+
+		id = alloc_unr(soaio_kproc_unr);
+		error = kproc_create(soaio_kproc_loop, (void *)(intptr_t)id,
+		    &p, 0, 0, "soaiod%d", id);
+		if (error != 0) {
+			free_unr(soaio_kproc_unr, id);
+			mtx_lock(&soaio_jobs_lock);
+			soaio_starting--;
+			break;
+		}
+
+		mtx_lock(&soaio_jobs_lock);
+		soaio_num_procs++;
+	}
+	mtx_unlock(&soaio_jobs_lock);
+}
+
+static void
+soaio_enqueue(struct task *task)
+{
+
+	mtx_lock(&soaio_jobs_lock);
+	MPASS(task->ta_pending == 0);
+	task->ta_pending++;
+	STAILQ_INSERT_TAIL(&soaio_jobs, task, ta_link);
+	soaio_queued++;
+	if (soaio_queued <= soaio_idle)
+		wakeup_one(&soaio_idle);
+	else if (soaio_num_procs < soaio_max_procs)
+		taskqueue_enqueue(taskqueue_thread, &soaio_kproc_task);
+	mtx_unlock(&soaio_jobs_lock);
+}
+
+static void
+soaio_init(void)
+{
+
+	soaio_lifetime = AIOD_LIFETIME_DEFAULT;
+	STAILQ_INIT(&soaio_jobs);
+	mtx_init(&soaio_jobs_lock, "soaio jobs", NULL, MTX_DEF);
+	soaio_kproc_unr = new_unrhdr(1, INT_MAX, NULL);
+	TASK_INIT(&soaio_kproc_task, 0, soaio_kproc_create, NULL);
+	if (soaio_target_procs > 0)
+		taskqueue_enqueue(taskqueue_thread, &soaio_kproc_task);
+}
+SYSINIT(soaio, SI_SUB_VFS, SI_ORDER_ANY, soaio_init, NULL);
+
+static __inline int
+soaio_ready(struct socket *so, struct sockbuf *sb)
+{
+	return (sb == &so->so_rcv ? soreadable(so) : sowriteable(so));
+}
+
+static void
+soaio_process_job(struct socket *so, struct sockbuf *sb, struct kaiocb *job)
+{
+	struct ucred *td_savedcred;
+	struct thread *td;
+	struct file *fp;
+	struct uio uio;
+	struct iovec iov;
+	size_t cnt;
+	int error, flags;
+
+	SOCKBUF_UNLOCK(sb);
+	aio_switch_vmspace(job);
+	td = curthread;
+	fp = job->fd_file;
+retry:
+	td_savedcred = td->td_ucred;
+	td->td_ucred = job->cred;
+
+	cnt = job->uaiocb.aio_nbytes;
+	iov.iov_base = (void *)(uintptr_t)job->uaiocb.aio_buf;
+	iov.iov_len = cnt;
+	uio.uio_iov = &iov;
+	uio.uio_iovcnt = 1;
+	uio.uio_offset = 0;
+	uio.uio_resid = cnt;
+	uio.uio_segflg = UIO_USERSPACE;
+	uio.uio_td = td;
+	flags = MSG_NBIO;
+
+	/* TODO: Charge ru_msg* to job. */
+
+	if (sb == &so->so_rcv) {
+		uio.uio_rw = UIO_READ;
+#ifdef MAC
+		error = mac_socket_check_receive(fp->f_cred, so);
+		if (error == 0)
+#endif
+			error = soreceive(so, NULL, &uio, NULL, NULL, &flags);
+	} else {
+		uio.uio_rw = UIO_WRITE;
+#ifdef MAC
+		error = mac_socket_check_send(fp->f_cred, so);
+		if (error == 0)
+#endif
+			error = sosend(so, NULL, &uio, NULL, NULL, flags, td);
+		if (error == EPIPE && (so->so_options & SO_NOSIGPIPE) == 0) {
+			PROC_LOCK(job->userproc);
+			kern_psignal(job->userproc, SIGPIPE);
+			PROC_UNLOCK(job->userproc);
+		}
+	}
+
+	cnt -= uio.uio_resid;
+	td->td_ucred = td_savedcred;
+
+	/* XXX: Not sure if this is needed? */
+	if (cnt != 0 && (error == ERESTART || error == EINTR ||
+	    error == EWOULDBLOCK))
+		error = 0;
+	if (error == EWOULDBLOCK) {
+		/*
+		 * A read() or write() on the socket raced with this
+		 * request.  If the socket is now ready, try again.
+		 * If it is not, place this request at the head of the
+		 * queue to try again when the socket is ready.
+		 */
+		SOCKBUF_LOCK(sb);		
+		empty_results++;
+		if (soaio_ready(so, sb)) {
+			empty_retries++;
+			SOCKBUF_UNLOCK(sb);
+			goto retry;
+		}
+
+		if (!aio_set_cancel_function(job, soo_aio_cancel)) {
+			MPASS(cnt == 0);
+			SOCKBUF_UNLOCK(sb);
+			aio_cancel(job);
+			SOCKBUF_LOCK(sb);
+		} else {
+			TAILQ_INSERT_HEAD(&sb->sb_aiojobq, job, list);
+		}
+	} else {
+		aio_complete(job, cnt, error);
+		SOCKBUF_LOCK(sb);
+	}
+}
+
+static void
+soaio_process_sb(struct socket *so, struct sockbuf *sb)
+{
+	struct kaiocb *job;
+
+	SOCKBUF_LOCK(sb);
+	while (!TAILQ_EMPTY(&sb->sb_aiojobq) && soaio_ready(so, sb)) {
+		job = TAILQ_FIRST(&sb->sb_aiojobq);
+		TAILQ_REMOVE(&sb->sb_aiojobq, job, list);
+		if (!aio_clear_cancel_function(job))
+			continue;
+
+		soaio_process_job(so, sb, job);
+	}
+
+	/*
+	 * If there are still pending requests, the socket must not be
+	 * ready so set SB_AIO to request a wakeup when the socket
+	 * becomes ready.
+	 */
+	if (!TAILQ_EMPTY(&sb->sb_aiojobq))
+		sb->sb_flags |= SB_AIO;
+	sb->sb_flags &= ~SB_AIO_RUNNING;
+	SOCKBUF_UNLOCK(sb);
+
+	ACCEPT_LOCK();
+	SOCK_LOCK(so);
+	sorele(so);
+}
+
+void
+soaio_rcv(void *context, int pending)
+{
+	struct socket *so;
+
+	so = context;
+	soaio_process_sb(so, &so->so_rcv);
+}
+
+void
+soaio_snd(void *context, int pending)
+{
+	struct socket *so;
+
+	so = context;
+	soaio_process_sb(so, &so->so_snd);
+}
+
+void
+sowakeup_aio(struct socket *so, struct sockbuf *sb)
+{
+
+	SOCKBUF_LOCK_ASSERT(sb);
+	sb->sb_flags &= ~SB_AIO;
+	if (sb->sb_flags & SB_AIO_RUNNING)
+		return;
+	sb->sb_flags |= SB_AIO_RUNNING;
+	if (sb == &so->so_snd)
+		SOCK_LOCK(so);
+	soref(so);
+	if (sb == &so->so_snd)
+		SOCK_UNLOCK(so);
+	soaio_enqueue(&sb->sb_aiotask);
+}
+
+static void
+soo_aio_cancel(struct kaiocb *job)
+{
+	struct socket *so;
+	struct sockbuf *sb;
+	int opcode;
+
+	so = job->fd_file->f_data;
+	opcode = job->uaiocb.aio_lio_opcode;
+	if (opcode == LIO_READ)
+		sb = &so->so_rcv;
+	else {
+		MPASS(opcode == LIO_WRITE);
+		sb = &so->so_snd;
+	}
+
+	SOCKBUF_LOCK(sb);
+	if (!aio_cancel_cleared(job))
+		TAILQ_REMOVE(&sb->sb_aiojobq, job, list);
+	if (TAILQ_EMPTY(&sb->sb_aiojobq))
+		sb->sb_flags &= ~SB_AIO;
+	SOCKBUF_UNLOCK(sb);
+
+	aio_cancel(job);
+}
+
+static int
+soo_aio_queue(struct file *fp, struct kaiocb *job)
+{
+	struct socket *so;
+	struct sockbuf *sb;
+
+	so = fp->f_data;
+	switch (job->uaiocb.aio_lio_opcode) {
+	case LIO_READ:
+		sb = &so->so_rcv;
+		break;
+	case LIO_WRITE:
+		sb = &so->so_snd;
+		break;
+	default:
+		return (EINVAL);
+	}
+
+	SOCKBUF_LOCK(sb);
+	if (!aio_set_cancel_function(job, soo_aio_cancel))
+		panic("new job was cancelled");
+	TAILQ_INSERT_TAIL(&sb->sb_aiojobq, job, list);
+	if (!(sb->sb_flags & SB_AIO_RUNNING)) {
+		if (soaio_ready(so, sb))
+			sowakeup_aio(so, sb);
+		else
+			sb->sb_flags |= SB_AIO;
+	}
+	SOCKBUF_UNLOCK(sb);
+	return (0);
+}

Modified: head/sys/kern/uipc_debug.c
==============================================================================
--- head/sys/kern/uipc_debug.c	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/sys/kern/uipc_debug.c	Tue Mar  1 18:12:14 2016	(r296277)
@@ -416,6 +416,9 @@ db_print_sockbuf(struct sockbuf *sb, con
 	db_printf("sb_flags: 0x%x (", sb->sb_flags);
 	db_print_sbflags(sb->sb_flags);
 	db_printf(")\n");
+
+	db_print_indent(indent);
+	db_printf("sb_aiojobq first: %p\n", TAILQ_FIRST(&sb->sb_aiojobq));
 }
 
 static void
@@ -470,7 +473,6 @@ db_print_socket(struct socket *so, const
 	db_print_indent(indent);
 	db_printf("so_sigio: %p   ", so->so_sigio);
 	db_printf("so_oobmark: %lu   ", so->so_oobmark);
-	db_printf("so_aiojobq first: %p\n", TAILQ_FIRST(&so->so_aiojobq));
 
 	db_print_sockbuf(&so->so_rcv, "so_rcv", indent);
 	db_print_sockbuf(&so->so_snd, "so_snd", indent);

Modified: head/sys/kern/uipc_sockbuf.c
==============================================================================
--- head/sys/kern/uipc_sockbuf.c	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/sys/kern/uipc_sockbuf.c	Tue Mar  1 18:12:14 2016	(r296277)
@@ -332,7 +332,7 @@ sowakeup(struct socket *so, struct sockb
 	} else
 		ret = SU_OK;
 	if (sb->sb_flags & SB_AIO)
-		aio_swake(so, sb);
+		sowakeup_aio(so, sb);
 	SOCKBUF_UNLOCK(sb);
 	if (ret == SU_ISCONNECTED)
 		soisconnected(so);

Modified: head/sys/kern/uipc_socket.c
==============================================================================
--- head/sys/kern/uipc_socket.c	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/sys/kern/uipc_socket.c	Tue Mar  1 18:12:14 2016	(r296277)
@@ -134,6 +134,7 @@ __FBSDID("$FreeBSD$");
 #include <sys/stat.h>
 #include <sys/sx.h>
 #include <sys/sysctl.h>
+#include <sys/taskqueue.h>
 #include <sys/uio.h>
 #include <sys/jail.h>
 #include <sys/syslog.h>
@@ -396,7 +397,10 @@ soalloc(struct vnet *vnet)
 	SOCKBUF_LOCK_INIT(&so->so_rcv, "so_rcv");
 	sx_init(&so->so_snd.sb_sx, "so_snd_sx");
 	sx_init(&so->so_rcv.sb_sx, "so_rcv_sx");
-	TAILQ_INIT(&so->so_aiojobq);
+	TAILQ_INIT(&so->so_snd.sb_aiojobq);
+	TAILQ_INIT(&so->so_rcv.sb_aiojobq);
+	TASK_INIT(&so->so_snd.sb_aiotask, 0, soaio_snd, so);
+	TASK_INIT(&so->so_rcv.sb_aiotask, 0, soaio_rcv, so);
 #ifdef VIMAGE
 	VNET_ASSERT(vnet != NULL, ("%s:%d vnet is NULL, so=%p",
 	    __func__, __LINE__, so));

Modified: head/sys/kern/vfs_aio.c
==============================================================================
--- head/sys/kern/vfs_aio.c	Tue Mar  1 18:05:11 2016	(r296276)
+++ head/sys/kern/vfs_aio.c	Tue Mar  1 18:12:14 2016	(r296277)
@@ -72,8 +72,6 @@ __FBSDID("$FreeBSD$");
 #include <vm/uma.h>
 #include <sys/aio.h>
 
-#include "opt_vfs_aio.h"
-
 /*
  * Counter for allocating reference ids to new jobs.  Wrapped to 1 on
  * overflow. (XXX will be removed soon.)
@@ -85,14 +83,6 @@ static u_long jobrefid;
  */
 static uint64_t jobseqno;
 
-#define JOBST_NULL		0
-#define JOBST_JOBQSOCK		1
-#define JOBST_JOBQGLOBAL	2
-#define JOBST_JOBRUNNING	3
-#define JOBST_JOBFINISHED	4
-#define JOBST_JOBQBUF		5
-#define JOBST_JOBQSYNC		6
-
 #ifndef MAX_AIO_PER_PROC
 #define MAX_AIO_PER_PROC	32
 #endif
@@ -101,26 +91,14 @@ static uint64_t jobseqno;
 #define MAX_AIO_QUEUE_PER_PROC	256 /* Bigger than AIO_LISTIO_MAX */
 #endif
 
-#ifndef MAX_AIO_PROCS
-#define MAX_AIO_PROCS		32
-#endif
-
 #ifndef MAX_AIO_QUEUE
 #define	MAX_AIO_QUEUE		1024 /* Bigger than AIO_LISTIO_MAX */
 #endif
 
-#ifndef TARGET_AIO_PROCS
-#define TARGET_AIO_PROCS	4
-#endif
-
 #ifndef MAX_BUF_AIO
 #define MAX_BUF_AIO		16
 #endif
 
-#ifndef AIOD_LIFETIME_DEFAULT
-#define AIOD_LIFETIME_DEFAULT	(30 * hz)
-#endif
-
 FEATURE(aio, "Asynchronous I/O");
 
 static MALLOC_DEFINE(M_LIO, "lio", "listio aio control block list");
@@ -128,6 +106,10 @@ static MALLOC_DEFINE(M_LIO, "lio", "list
 static SYSCTL_NODE(_vfs, OID_AUTO, aio, CTLFLAG_RW, 0,
     "Async IO management");
 
+static int enable_aio_unsafe = 0;
+SYSCTL_INT(_vfs_aio, OID_AUTO, enable_unsafe, CTLFLAG_RW, &enable_aio_unsafe, 0,
+    "Permit asynchronous IO on all file types, not just known-safe types");
+
 static int max_aio_procs = MAX_AIO_PROCS;
 SYSCTL_INT(_vfs_aio, OID_AUTO, max_aio_procs, CTLFLAG_RW, &max_aio_procs, 0,
     "Maximum number of kernel processes to use for handling async IO ");
@@ -165,11 +147,6 @@ static int aiod_lifetime;
 SYSCTL_INT(_vfs_aio, OID_AUTO, aiod_lifetime, CTLFLAG_RW, &aiod_lifetime, 0,
     "Maximum lifetime for idle aiod");
 
-static int unloadable = 0;
-SYSCTL_INT(_vfs_aio, OID_AUTO, unloadable, CTLFLAG_RW, &unloadable, 0,
-    "Allow unload of aio (not recommended)");
-
-
 static int max_aio_per_proc = MAX_AIO_PER_PROC;
 SYSCTL_INT(_vfs_aio, OID_AUTO, max_aio_per_proc, CTLFLAG_RW, &max_aio_per_proc,
     0,
@@ -208,46 +185,27 @@ typedef struct oaiocb {
  */
 
 /*
- * Current, there is only two backends: BIO and generic file I/O.
- * socket I/O is served by generic file I/O, this is not a good idea, since
- * disk file I/O and any other types without O_NONBLOCK flag can block daemon
- * processes, if there is no thread to serve socket I/O, the socket I/O will be
- * delayed too long or starved, we should create some processes dedicated to
- * sockets to do non-blocking I/O, same for pipe and fifo, for these I/O
- * systems we really need non-blocking interface, fiddling O_NONBLOCK in file
- * structure is not safe because there is race between userland and aio
- * daemons.
- */
-
-struct kaiocb {
-	TAILQ_ENTRY(kaiocb) list;	/* (b) internal list of for backend */
-	TAILQ_ENTRY(kaiocb) plist;	/* (a) list of jobs for each backend */
-	TAILQ_ENTRY(kaiocb) allist;	/* (a) list of all jobs in proc */
-	int	jobflags;		/* (a) job flags */
-	int	jobstate;		/* (b) job state */
-	int	inputcharge;		/* (*) input blockes */
-	int	outputcharge;		/* (*) output blockes */
-	struct	bio *bp;		/* (*) BIO backend BIO pointer */
-	struct	buf *pbuf;		/* (*) BIO backend buffer pointer */
-	struct	vm_page *pages[btoc(MAXPHYS)+1]; /* BIO backend pages */
-	int	npages;			/* BIO backend number of pages */
-	struct	proc *userproc;		/* (*) user process */
-	struct	ucred *cred;		/* (*) active credential when created */
-	struct	file *fd_file;		/* (*) pointer to file structure */
-	struct	aioliojob *lio;		/* (*) optional lio job */
-	struct	aiocb *ujob;		/* (*) pointer in userspace of aiocb */
-	struct	knlist klist;		/* (a) list of knotes */
-	struct	aiocb uaiocb;		/* (*) kernel I/O control block */
-	ksiginfo_t ksi;			/* (a) realtime signal info */
-	uint64_t seqno;			/* (*) job number */
-	int	pending;		/* (a) number of pending I/O, aio_fsync only */
-};
+ * If the routine that services an AIO request blocks while running in an
+ * AIO kernel process it can starve other I/O requests.  BIO requests
+ * queued via aio_qphysio() complete in GEOM and do not use AIO kernel
+ * processes at all.  Socket I/O requests use a separate pool of
+ * kprocs and also force non-blocking I/O.  Other file I/O requests
+ * use the generic fo_read/fo_write operations which can block.  The
+ * fsync and mlock operations can also block while executing.  Ideally
+ * none of these requests would block while executing.
+ *
+ * Note that the service routines cannot toggle O_NONBLOCK in the file
+ * structure directly while handling a request due to races with
+ * userland threads.
+ */
 
 /* jobflags */
-#define	KAIOCB_DONE		0x01
-#define	KAIOCB_BUFDONE		0x02
-#define	KAIOCB_RUNDOWN		0x04
+#define	KAIOCB_QUEUEING		0x01
+#define	KAIOCB_CANCELLED	0x02
+#define	KAIOCB_CANCELLING	0x04
 #define	KAIOCB_CHECKSYNC	0x08
+#define	KAIOCB_CLEARED		0x10
+#define	KAIOCB_FINISHED		0x20
 
 /*
  * AIO process info
@@ -293,9 +251,10 @@ struct kaioinfo {
 	TAILQ_HEAD(,kaiocb) kaio_done;	/* (a) done queue for process */
 	TAILQ_HEAD(,aioliojob) kaio_liojoblist; /* (a) list of lio jobs */
 	TAILQ_HEAD(,kaiocb) kaio_jobqueue;	/* (a) job queue for process */
-	TAILQ_HEAD(,kaiocb) kaio_bufqueue;	/* (a) buffer job queue */
 	TAILQ_HEAD(,kaiocb) kaio_syncqueue;	/* (a) queue for aio_fsync */
+	TAILQ_HEAD(,kaiocb) kaio_syncready;  /* (a) second q for aio_fsync */
 	struct	task kaio_task;		/* (*) task to kick aio processes */
+	struct	task kaio_sync_task;	/* (*) task to schedule fsync jobs */
 };
 
 #define AIO_LOCK(ki)		mtx_lock(&(ki)->kaio_mtx)
@@ -332,21 +291,18 @@ static int	aio_free_entry(struct kaiocb 
 static void	aio_process_rw(struct kaiocb *job);
 static void	aio_process_sync(struct kaiocb *job);
 static void	aio_process_mlock(struct kaiocb *job);
+static void	aio_schedule_fsync(void *context, int pending);
 static int	aio_newproc(int *);
 int		aio_aqueue(struct thread *td, struct aiocb *ujob,
 		    struct aioliojob *lio, int type, struct aiocb_ops *ops);
+static int	aio_queue_file(struct file *fp, struct kaiocb *job);
 static void	aio_physwakeup(struct bio *bp);
 static void	aio_proc_rundown(void *arg, struct proc *p);
 static void	aio_proc_rundown_exec(void *arg, struct proc *p,
 		    struct image_params *imgp);
 static int	aio_qphysio(struct proc *p, struct kaiocb *job);
 static void	aio_daemon(void *param);
-static void	aio_swake_cb(struct socket *, struct sockbuf *);
-static int	aio_unload(void);
-static void	aio_bio_done_notify(struct proc *userp, struct kaiocb *job,
-		    int type);
-#define DONE_BUF	1
-#define DONE_QUEUE	2
+static void	aio_bio_done_notify(struct proc *userp, struct kaiocb *job);
 static int	aio_kick(struct proc *userp);
 static void	aio_kick_nowait(struct proc *userp);
 static void	aio_kick_helper(void *context, int pending);
@@ -397,13 +353,10 @@ aio_modload(struct module *module, int c
 	case MOD_LOAD:
 		aio_onceonly();
 		break;
-	case MOD_UNLOAD:
-		error = aio_unload();
-		break;
 	case MOD_SHUTDOWN:
 		break;
 	default:
-		error = EINVAL;
+		error = EOPNOTSUPP;
 		break;
 	}
 	return (error);
@@ -471,8 +424,6 @@ aio_onceonly(void)
 {
 	int error;
 
-	/* XXX: should probably just use so->callback */
-	aio_swake = &aio_swake_cb;
 	exit_tag = EVENTHANDLER_REGISTER(process_exit, aio_proc_rundown, NULL,
 	    EVENTHANDLER_PRI_ANY);
 	exec_tag = EVENTHANDLER_REGISTER(process_exec, aio_proc_rundown_exec,
@@ -513,55 +464,6 @@ aio_onceonly(void)
 }
 
 /*
- * Callback for unload of AIO when used as a module.
- */
-static int
-aio_unload(void)
-{
-	int error;
-
-	/*
-	 * XXX: no unloads by default, it's too dangerous.
-	 * perhaps we could do it if locked out callers and then
-	 * did an aio_proc_rundown() on each process.
-	 *
-	 * jhb: aio_proc_rundown() needs to run on curproc though,
-	 * so I don't think that would fly.
-	 */
-	if (!unloadable)
-		return (EOPNOTSUPP);
-
-#ifdef COMPAT_FREEBSD32
-	syscall32_helper_unregister(aio32_syscalls);
-#endif
-	syscall_helper_unregister(aio_syscalls);
-
-	error = kqueue_del_filteropts(EVFILT_AIO);
-	if (error)
-		return error;
-	error = kqueue_del_filteropts(EVFILT_LIO);
-	if (error)
-		return error;
-	async_io_version = 0;
-	aio_swake = NULL;
-	taskqueue_free(taskqueue_aiod_kick);
-	delete_unrhdr(aiod_unr);
-	uma_zdestroy(kaio_zone);
-	uma_zdestroy(aiop_zone);
-	uma_zdestroy(aiocb_zone);
-	uma_zdestroy(aiol_zone);
-	uma_zdestroy(aiolio_zone);
-	EVENTHANDLER_DEREGISTER(process_exit, exit_tag);
-	EVENTHANDLER_DEREGISTER(process_exec, exec_tag);
-	mtx_destroy(&aio_job_mtx);
-	sema_destroy(&aio_newproc_sem);
-	p31b_setcfg(CTL_P1003_1B_AIO_LISTIO_MAX, -1);
-	p31b_setcfg(CTL_P1003_1B_AIO_MAX, -1);
-	p31b_setcfg(CTL_P1003_1B_AIO_PRIO_DELTA_MAX, -1);
-	return (0);
-}
-
-/*
  * Init the per-process aioinfo structure.  The aioinfo limits are set
  * per-process for user limit (resource) management.
  */
@@ -582,10 +484,11 @@ aio_init_aioinfo(struct proc *p)
 	TAILQ_INIT(&ki->kaio_all);
 	TAILQ_INIT(&ki->kaio_done);
 	TAILQ_INIT(&ki->kaio_jobqueue);
-	TAILQ_INIT(&ki->kaio_bufqueue);
 	TAILQ_INIT(&ki->kaio_liojoblist);
 	TAILQ_INIT(&ki->kaio_syncqueue);
+	TAILQ_INIT(&ki->kaio_syncready);
 	TASK_INIT(&ki->kaio_task, 0, aio_kick_helper, p);
+	TASK_INIT(&ki->kaio_sync_task, 0, aio_schedule_fsync, ki);
 	PROC_LOCK(p);
 	if (p->p_aioinfo == NULL) {
 		p->p_aioinfo = ki;
@@ -637,7 +540,7 @@ aio_free_entry(struct kaiocb *job)
 	MPASS(ki != NULL);
 
 	AIO_LOCK_ASSERT(ki, MA_OWNED);
-	MPASS(job->jobstate == JOBST_JOBFINISHED);
+	MPASS(job->jobflags & KAIOCB_FINISHED);
 
 	atomic_subtract_int(&num_queue_count, 1);
 
@@ -670,7 +573,6 @@ aio_free_entry(struct kaiocb *job)
 	PROC_UNLOCK(p);
 
 	MPASS(job->bp == NULL);
-	job->jobstate = JOBST_NULL;
 	AIO_UNLOCK(ki);
 
 	/*
@@ -709,6 +611,57 @@ aio_proc_rundown_exec(void *arg, struct 
    	aio_proc_rundown(arg, p);
 }
 
+static int
+aio_cancel_job(struct proc *p, struct kaioinfo *ki, struct kaiocb *job)
+{
+	aio_cancel_fn_t *func;
+	int cancelled;
+
+	AIO_LOCK_ASSERT(ki, MA_OWNED);
+	if (job->jobflags & (KAIOCB_CANCELLED | KAIOCB_FINISHED))
+		return (0);
+	MPASS((job->jobflags & KAIOCB_CANCELLING) == 0);
+	job->jobflags |= KAIOCB_CANCELLED;
+
+	func = job->cancel_fn;
+
+	/*
+	 * If there is no cancel routine, just leave the job marked as
+	 * cancelled.  The job should be in active use by a caller who
+	 * will either complete it normally or cancel it when it fails
+	 * to install a cancel routine.
+	 */
+	if (func == NULL)
+		return (0);
+
+	/*
+	 * Set the CANCELLING flag so that aio_complete() will defer
+	 * completions of this job.  This prevents the job from being
+	 * freed out from under the cancel callback.  After the
+	 * callback any deferred completion (whether from the callback
+	 * or any other source) will be completed.
+	 */
+	job->jobflags |= KAIOCB_CANCELLING;
+	AIO_UNLOCK(ki);

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***


