Date: Sun, 6 Jun 1999 14:20:04 GMT
From: cmsedore@maxwell.syr.edu
To: FreeBSD-gnats-submit@freebsd.org
Subject: kern/12053: patches for aio to improve socket io and add aio_waitcomplete()
Message-ID: <199906061420.OAA13268@static.maxwell.syr.edu>
>Number:        12053
>Category:      kern
>Synopsis:      fixes a few aio bugs, makes socket io better, adds aio_waitcomplete
>Confidential:  no
>Severity:      non-critical
>Priority:      low
>Responsible:   freebsd-bugs
>State:         open
>Quarter:
>Keywords:
>Date-Required:
>Class:         change-request
>Submitter-Id:  current-users
>Arrival-Date:  Sun Jun  6 11:20:00 PDT 1999
>Closed-Date:
>Last-Modified:
>Originator:    Christopher M Sedore
>Release:       FreeBSD 4.0-19990503-CURRENT i386
>Organization:
>Environment:
Patches should apply cleanly to -current.
>Description:
These patches fix a few minor functional bugs, including at least one race
condition that could be used for a denial of service (aio_process does not
check that the file descriptor is still open before using it).

In addition to the fixes, the patches change the way aio is done on sockets.
Rather than blocking an aiod in the socket routines, queueing is reworked so
that an aiod is awakened only when the socket becomes readable or writable
(sowakeup() and struct socket are modified).  Without this patch the aio
routines are of little use for sockets: queueing 32 reads on idle sockets
blocks all the aiods until one completes, effectively stalling all other
pending requests.

Some parameter changes were made to increase the number of aio requests that
may be queued, and to set the minimum number of aiods to 4.  Aiods (up to the
minimum number) are created the first time a process uses aio.  It is
necessary to have them running in advance because the socket io routines
cannot start new processes themselves.

The syscall aio_waitcomplete() was added (this is not part of any standard;
it is of my own creation).  This system call fills what I believe to be a
void in the aio routines, namely a way to hand an aio request off to the
system and then have the system hand it back when it completes.  This
eliminates the overhead of tracking all outstanding aio requests, and is
MUCH more efficient than aio_suspend() since you don't have to poll each
individual request.  (A usage sketch follows the How-To-Repeat section
below.)

This code has been tested fairly extensively on SMP and non-SMP kernels
without incident.  Performance-wise it works very well, having little
difficulty pushing 3-4 MB/sec.  Simple benchmarking of aio_read()/aio_write()
paired with aio_waitcomplete() against select() shows that the aio routines
perform better once the number of file descriptors exceeds about 38.  In
real-world applications the advantage lies more in the ability to use async
io to avoid stalls in servicing sockets while doing disk io.
>How-To-Repeat:
You can expose some of the existing aio problems by opening a file, queueing
a bunch of aio requests on the descriptor, and closing the file before the
requests have a chance to complete.  This should result in a panic.  (A
sketch of such a test program follows below.)
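For concreteness, a minimal sketch of that repeat-by scenario.  This is
illustrative only; the file path and the request count are arbitrary
placeholders, and the program assumes an existing readable file:

	/*
	 * Repro sketch: queue a batch of aio reads, then close the
	 * descriptor while the requests are still in flight.  On an
	 * unpatched kernel this should trigger the panic described
	 * above.  "/some/large/file" is a placeholder path.
	 */
	#include <aio.h>
	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	int
	main(void)
	{
		static struct aiocb cb[32];
		static char buf[32][512];
		int fd, i;

		fd = open("/some/large/file", O_RDONLY);
		if (fd == -1)
			return (1);
		for (i = 0; i < 32; i++) {
			memset(&cb[i], 0, sizeof(cb[i]));
			cb[i].aio_fildes = fd;
			cb[i].aio_buf = buf[i];
			cb[i].aio_nbytes = sizeof(buf[i]);
			cb[i].aio_offset = (off_t)i * sizeof(buf[i]);
			(void)aio_read(&cb[i]);	/* queue the requests */
		}
		close(fd);	/* close before the requests complete */
		return (0);
	}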
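And for reference, a hedged sketch of the intended aio_waitcomplete() usage.
The prototype matches the declaration this patch adds to <sys/aio.h>; note
that the patch adds syscall 339 but no libc stub, so a real test program
would have to generate its own stub.  The behavior assumed in the comments
(return value is the completed request's status, *cbp receives the caller's
aiocb pointer) follows from the kernel implementation in the patch:

	/*
	 * Usage sketch: hand requests to the kernel with aio_read()/
	 * aio_write(), then let aio_waitcomplete() hand back whichever
	 * finishes first.  Assuming a normal libc-style stub, the
	 * return value is the completed request's status (its byte
	 * count) and *cbp is set to the caller's aiocb pointer;
	 * -1/errno covers EAGAIN (timeout) and EINTR (signal).
	 */
	#include <aio.h>
	#include <time.h>

	int aio_waitcomplete(struct aiocb **, struct timespec *);	/* no libc stub yet */

	void
	drain_completions(void)
	{
		struct aiocb *cbp;
		int n;

		for (;;) {
			/* NULL timeout: block until some request completes. */
			n = aio_waitcomplete(&cbp, NULL);
			if (n == -1)
				break;	/* EAGAIN, EINTR, ... */
			/* cbp identifies the finished request; reuse or free it. */
		}
	}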
>Fix:
*** /mnt/sup/current/src/sys/sys/aio.h	Sun Jan 17 22:33:08 1999
--- sys/aio.h	Mon May  3 20:36:47 1999
***************
*** 143,150 ****
--- 143,174 ----
  __END_DECLS
  
  #else
  
+ /*
+  * Job queue item
+  */
+ 
+ #define AIOCBLIST_CANCELLED	0x1
+ #define AIOCBLIST_RUNDOWN	0x4
+ #define AIOCBLIST_ASYNCFREE	0x8
+ #define AIOCBLIST_DONE		0x10
+ 
+ struct aiocblist {
+ 	TAILQ_ENTRY (aiocblist) list;		/* List of jobs */
+ 	TAILQ_ENTRY (aiocblist) plist;		/* List of jobs for proc */
+ 	int	jobflags;
+ 	int	jobstate;
+ 	int	inputcharge, outputcharge;
+ 	struct	buf *bp;			/* buffer pointer */
+ 	struct	proc *userproc;			/* User process */
+ 	struct	aioproclist *jobaioproc;	/* AIO process descriptor */
+ 	struct	aio_liojob *lio;		/* optional lio job */
+ 	struct	aiocb *uuaiocb;			/* pointer in userspace of aiocb */
+ 	struct	aiocb uaiocb;			/* Kernel I/O control block */
+ };
+ 
  void	aio_proc_rundown(struct proc *p);
+ 
+ void	aio_swake(struct socket *, struct sockbuf *);
+ int	aio_waitcomplete(struct aiocb **, struct timespec *);
  
  #endif
*** /mnt/sup/current/src/sys/sys/socketvar.h	Sun Apr  4 21:41:28 1999
--- sys/socketvar.h	Fri May  7 19:48:41 1999
***************
*** 80,85 ****
--- 80,86 ----
  	struct	sigio *so_sigio;	/* information for async I/O or
  					   out of band data (SIGURG) */
  	u_long	so_oobmark;		/* chars to oob mark */
+ 	TAILQ_HEAD(, aiocblist) so_aiojobq; /* AIO ops waiting on socket */
  /*
   * Variables for socket buffering.
   */
***************
*** 102,107 ****
--- 103,109 ----
  #define	SB_ASYNC	0x10		/* ASYNC I/O, need signals */
  #define	SB_UPCALL	0x20		/* someone wants an upcall */
  #define	SB_NOINTR	0x40		/* operations not interruptible */
+ #define	SB_AIO		0x80		/* AIO operations queued */
  		void	(*so_upcall) __P((struct socket *, void *, int));
  		void	*so_upcallarg;
***************
*** 169,175 ****
  /*
   * Do we need to notify the other side when I/O is possible?
   */
! #define	sb_notify(sb)	(((sb)->sb_flags & (SB_WAIT|SB_SEL|SB_ASYNC|SB_UPCALL)) != 0)
  
  /*
   * How much space is there in a socket buffer (so->so_snd or so->so_rcv)?
--- 171,177 ----
  /*
   * Do we need to notify the other side when I/O is possible?
   */
! #define	sb_notify(sb)	(((sb)->sb_flags & (SB_WAIT|SB_SEL|SB_ASYNC|SB_UPCALL|SB_AIO)) != 0)
  
  /*
   * How much space is there in a socket buffer (so->so_snd or so->so_rcv)?
*** /mnt/sup/current/src/sys/kern/syscalls.master	Wed Apr 28 11:28:49 1999
--- kern/syscalls.master	Mon May  3 20:42:14 1999
***************
*** 474,476 ****
--- 474,477 ----
  			    struct sf_hdtr *hdtr, off_t *sbytes, int flags); }
  337	STD	BSD	{ int kldsym(int fileid, int cmd, void *data); }
  338	STD	BSD	{ int jail(struct jail *jail); }
+ 339	STD	BSD	{ int aio_waitcomplete(struct aiocb **aiocbp, struct timespec *timeout); }
*** /mnt/sup/current/src/sys/kern/uipc_socket.c	Fri May  7 23:15:45 1999
--- kern/uipc_socket.c	Fri May  7 19:47:39 1999
***************
*** 93,98 ****
--- 93,99 ----
  		bzero(so, sizeof *so);
  		so->so_gencnt = ++so_gencnt;
  		so->so_zone = socket_zone;
+ 		TAILQ_INIT(&so->so_aiojobq);
  	}
  	return so;
  }
*** /mnt/sup/current/src/sys/kern/uipc_socket2.c	Fri May  7 23:15:45 1999
--- kern/uipc_socket2.c	Mon May  3 20:20:07 1999
***************
*** 47,52 ****
--- 47,53 ----
  #include <sys/socketvar.h>
  #include <sys/signalvar.h>
  #include <sys/sysctl.h>
+ #include <sys/aio.h>	/* for aio_swake proto */
  
  /*
   * Primitive routines for operating on sockets and socket buffers
***************
*** 322,327 ****
--- 323,331 ----
  		pgsigio(so->so_sigio, SIGIO, 0);
  	if (sb->sb_flags & SB_UPCALL)
  		(*so->so_upcall)(so, so->so_upcallarg, M_DONTWAIT);
+ 	if (sb->sb_flags & SB_AIO) {
+ 		aio_swake(so, sb);
+ 	}
  }
*** /mnt/sup/current/src/sys/kern/vfs_aio.c	Fri May  7 23:15:46 1999
--- kern/vfs_aio.c	Fri May  7 20:01:41 1999
***************
*** 33,38 ****
--- 36,43 ----
  #include <sys/proc.h>
  #include <sys/resourcevar.h>
  #include <sys/signalvar.h>
+ #include <sys/protosw.h>
+ #include <sys/socketvar.h>
  #include <sys/sysctl.h>
  #include <sys/vnode.h>
  #include <sys/conf.h>
***************
*** 77,83 ****
  #endif
  
  #ifndef TARGET_AIO_PROCS
! #define TARGET_AIO_PROCS	0
  #endif
  
  #ifndef MAX_BUF_AIO
--- 82,88 ----
  #endif
  
  #ifndef TARGET_AIO_PROCS
! #define TARGET_AIO_PROCS	4
  #endif
  
  #ifndef MAX_BUF_AIO
***************
*** 144,173 ****
  
  /*
- * Job queue item
- */
- 
- #define AIOCBLIST_CANCELLED	0x1
- #define AIOCBLIST_RUNDOWN	0x4
- #define AIOCBLIST_ASYNCFREE	0x8
- #define AIOCBLIST_DONE		0x10
- 
- struct aiocblist {
- 	TAILQ_ENTRY (aiocblist) list;		/* List of jobs */
- 	TAILQ_ENTRY (aiocblist) plist;		/* List of jobs for proc */
- 	int	jobflags;
- 	int	jobstate;
- 	int	inputcharge, outputcharge;
- 	struct	buf *bp;			/* buffer pointer */
- 	struct	proc *userproc;			/* User process */
- 	struct	aioproclist *jobaioproc;	/* AIO process descriptor */
- 	struct	aio_liojob *lio;		/* optional lio job */
- 	struct	aiocb *uuaiocb;			/* pointer in userspace of aiocb */
- 	struct	aiocb uaiocb;			/* Kernel I/O control block */
- };
- 
- 
- /*
   * AIO process info
   */
  #define AIOP_FREE	0x1		/* proc on free queue */
--- 149,154 ----
***************
*** 215,220 ****
--- 196,202 ----
  	TAILQ_HEAD (,aiocblist) kaio_jobdone;	/* done queue for process */
  	TAILQ_HEAD (,aiocblist) kaio_bufqueue;	/* buffer job queue for process */
  	TAILQ_HEAD (,aiocblist) kaio_bufdone;	/* buffer done queue for process */
+ 	TAILQ_HEAD (,aiocblist) kaio_sockqueue;	/* queue for aios waiting on sockets */
  };
  
  #define KAIO_RUNDOWN	0x1	/* process is being run down */
***************
*** 290,296 ****
--- 272,282 ----
  		TAILQ_INIT(&ki->kaio_bufdone);
  		TAILQ_INIT(&ki->kaio_bufqueue);
  		TAILQ_INIT(&ki->kaio_liojoblist);
+ 		TAILQ_INIT(&ki->kaio_sockqueue);
  	}
+ 
+ 	while (num_aio_procs < target_aio_procs)
+ 		aio_newproc();
  }
***************
*** 406,412 ****
  	struct kaioinfo *ki;
  	struct aio_liojob *lj, *ljn;
  	struct aiocblist *aiocbe, *aiocbn;
! 
  	ki = p->p_aioinfo;
  	if (ki == NULL)
  		return;
--- 392,401 ----
  	struct kaioinfo *ki;
  	struct aio_liojob *lj, *ljn;
  	struct aiocblist *aiocbe, *aiocbn;
! 	struct file *fp;
! 	struct filedesc *fdp;
! 	struct socket *so;
! 
  	ki = p->p_aioinfo;
  	if (ki == NULL)
  		return;
***************
*** 419,424 ****
--- 408,442 ----
  			break;
  	}
  
+ 	/*
+ 	 * We move any aio ops that are waiting on socket io to the normal job
+ 	 * queues so they are cleaned up with any others.
+ 	 */
+ 
+ 	fdp = p->p_fd;
+ 
+ 	s = splnet();
+ 	for (aiocbe = TAILQ_FIRST(&ki->kaio_sockqueue);
+ 	     aiocbe;
+ 	     aiocbe = aiocbn) {
+ 		aiocbn = TAILQ_NEXT(aiocbe, plist);
+ 		fp = fdp->fd_ofiles[aiocbe->uaiocb.aio_fildes];
+ 		if (fp) {
+ 			so = (struct socket *)fp->f_data;
+ 			TAILQ_REMOVE(&so->so_aiojobq, aiocbe, list);
+ 			if (TAILQ_EMPTY(&so->so_aiojobq)) {
+ 				so->so_snd.sb_flags &= ~SB_AIO;
+ 				so->so_rcv.sb_flags &= ~SB_AIO;
+ 			}
+ 		}
+ 		TAILQ_REMOVE(&ki->kaio_sockqueue, aiocbe, plist);
+ 		TAILQ_INSERT_HEAD(&aio_jobs, aiocbe, list);
+ 		TAILQ_INSERT_HEAD(&ki->kaio_jobqueue, aiocbe, plist);
+ 	}
+ 	splx(s);
+ 
+ 
  restart1:
  	for ( aiocbe = TAILQ_FIRST(&ki->kaio_jobdone);
  	      aiocbe;
***************
*** 491,505 ****
  static struct aiocblist *
  aio_selectjob(struct aioproclist *aiop)
  {
! 
  	struct aiocblist *aiocbe;
  
  	aiocbe = TAILQ_FIRST(&aiop->jobtorun);
  	if (aiocbe) {
  		TAILQ_REMOVE(&aiop->jobtorun, aiocbe, list);
  		return aiocbe;
! 	}
! 
  	for (aiocbe = TAILQ_FIRST(&aio_jobs);
  	     aiocbe;
  	     aiocbe = TAILQ_NEXT(aiocbe, list)) {
--- 509,524 ----
  static struct aiocblist *
  aio_selectjob(struct aioproclist *aiop)
  {
! 	int s;
  	struct aiocblist *aiocbe;
  
  	aiocbe = TAILQ_FIRST(&aiop->jobtorun);
  	if (aiocbe) {
  		TAILQ_REMOVE(&aiop->jobtorun, aiocbe, list);
  		return aiocbe;
! 	}
! 
! 	s = splnet();
  	for (aiocbe = TAILQ_FIRST(&aio_jobs);
  	     aiocbe;
  	     aiocbe = TAILQ_NEXT(aiocbe, list)) {
***************
*** 511,519 ****
--- 530,540 ----
  		if (ki->kaio_active_count < ki->kaio_maxactive_count) {
  			TAILQ_REMOVE(&aio_jobs, aiocbe, list);
+ 			splx(s);
  			return aiocbe;
  		}
  	}
+ 	splx(s);
  
  	return NULL;
  }
***************
*** 550,555 ****
--- 571,582 ----
  	fd = cb->aio_fildes;
  	fp = fdp->fd_ofiles[fd];
  
+ 	if (fp == NULL) {
+ 		cb->_aiocb_private.error = EBADF;
+ 		cb->_aiocb_private.status = -1;
+ 		return;
+ 	}
+ 
  	aiov.iov_base = (void *) cb->aio_buf;
  	aiov.iov_len = cb->aio_nbytes;
***************
*** 588,593 ****
--- 615,621 ----
  	cnt -= auio.uio_resid;
  	cb->_aiocb_private.error = error;
  	cb->_aiocb_private.status = cnt;
+ 	return;
***************
*** 625,630 ****
--- 653,660 ----
  	aiop->aioprocflags |= AIOP_FREE;
  	TAILQ_INIT(&aiop->jobtorun);
  
+ 	s = splnet();
+ 
  	/*
  	 * Place thread (lightweight process) onto the AIO free thread list
  	 */
***************
*** 632,637 ****
--- 662,669 ----
  		wakeup(&aio_freeproc);
  	TAILQ_INSERT_HEAD(&aio_freeproc, aiop, list);
  
+ 	splx(s);
+ 
  	/*
  	 * Make up a name for the daemon
  	 */
***************
*** 679,687 ****
--- 711,721 ----
  		 * Take daemon off of free queue
  		 */
  		if (aiop->aioprocflags & AIOP_FREE) {
+ 			s = splnet();
  			TAILQ_REMOVE(&aio_freeproc, aiop, list);
  			TAILQ_INSERT_TAIL(&aio_activeproc, aiop, list);
  			aiop->aioprocflags &= ~AIOP_FREE;
+ 			splx(s);
  		}
  		aiop->aioprocflags &= ~AIOP_SCHED;
***************
*** 790,795 ****
--- 824,831 ----
  		 * the just finished I/O request into the done queue for the
  		 * associated client.
  		 */
+ 
+ 		s = splnet();
  		if (aiocbe->jobflags & AIOCBLIST_ASYNCFREE) {
  			aiocbe->jobflags &= ~AIOCBLIST_ASYNCFREE;
  			TAILQ_INSERT_HEAD(&aio_freejobs, aiocbe, list);
***************
*** 799,804 ****
--- 835,841 ----
  			TAILQ_INSERT_TAIL(&ki->kaio_jobdone, aiocbe, plist);
  		}
  
+ 		splx(s);
  
  		if (aiocbe->jobflags & AIOCBLIST_RUNDOWN) {
  			wakeup(aiocbe);
***************
*** 848,858 ****
--- 885,897 ----
  		 * If we are the first to be put onto the free queue, wakeup
  		 * anyone waiting for a daemon.
  		 */
+ 		s = splnet();
  		TAILQ_REMOVE(&aio_activeproc, aiop, list);
  		if (TAILQ_EMPTY(&aio_freeproc))
  			wakeup(&aio_freeproc);
  		TAILQ_INSERT_HEAD(&aio_freeproc, aiop, list);
  		aiop->aioprocflags |= AIOP_FREE;
+ 		splx(s);
  
  		/*
  		 * If daemon is inactive for a long time, allow it to exit, thereby
***************
*** 860,880 ****
  		 */
  		if (((aiop->aioprocflags & AIOP_SCHED) == 0) &&
  		    tsleep(mycp, PRIBIO, "aiordy", aiod_lifetime)) {
  			if ((TAILQ_FIRST(&aio_jobs) == NULL) &&
  			    (TAILQ_FIRST(&aiop->jobtorun) == NULL)) {
  				if ((aiop->aioprocflags & AIOP_FREE) &&
  				    (num_aio_procs > target_aio_procs)) {
  					TAILQ_REMOVE(&aio_freeproc, aiop, list);
  					zfree(aiop_zone, aiop);
  					num_aio_procs--;
  #if defined(DIAGNOSTIC)
  					if (mycp->p_vmspace->vm_refcnt <= 1)
  						printf("AIOD: bad vm refcnt for exiting daemon: %d\n",
  						    mycp->p_vmspace->vm_refcnt);
! #endif
  					exit1(mycp, 0);
  				}
  			}
  		}
  	}
  }
--- 899,922 ----
  		 */
  		if (((aiop->aioprocflags & AIOP_SCHED) == 0) &&
  		    tsleep(mycp, PRIBIO, "aiordy", aiod_lifetime)) {
+ 			s = splnet();
  			if ((TAILQ_FIRST(&aio_jobs) == NULL) &&
  			    (TAILQ_FIRST(&aiop->jobtorun) == NULL)) {
  				if ((aiop->aioprocflags & AIOP_FREE) &&
  				    (num_aio_procs > target_aio_procs)) {
  					TAILQ_REMOVE(&aio_freeproc, aiop, list);
+ 					splx(s);
  					zfree(aiop_zone, aiop);
  					num_aio_procs--;
  #if defined(DIAGNOSTIC)
  					if (mycp->p_vmspace->vm_refcnt <= 1)
  						printf("AIOD: bad vm refcnt for exiting daemon: %d\n",
  						    mycp->p_vmspace->vm_refcnt);
! #endif
  					exit1(mycp, 0);
  				}
  			}
+ 			splx(s);
  		}
  	}
  }
***************
*** 1141,1146 ****
--- 1183,1232 ----
  	return (error);
  }
  
+ void
+ aio_swake(struct socket *so, struct sockbuf *sb)
+ {
+ 	struct aiocblist *cb, *cbn;
+ 	struct proc *p;
+ 	struct kaioinfo *ki = NULL;
+ 	int opcode, wakecount = 0;
+ 	struct aioproclist *aiop;
+ 
+ 	if (sb == &so->so_snd) {
+ 		opcode = LIO_WRITE;
+ 		so->so_snd.sb_flags &= ~SB_AIO;
+ 	} else {
+ 		opcode = LIO_READ;
+ 		so->so_rcv.sb_flags &= ~SB_AIO;
+ 	}
+ 
+ 	for (cb = TAILQ_FIRST(&so->so_aiojobq); cb; cb = cbn) {
+ 		cbn = TAILQ_NEXT(cb, list);
+ 		if (opcode == cb->uaiocb.aio_lio_opcode) {
+ 			p = cb->userproc;
+ 			ki = p->p_aioinfo;
+ 			TAILQ_REMOVE(&so->so_aiojobq, cb, list);
+ 			TAILQ_REMOVE(&ki->kaio_sockqueue, cb, plist);
+ 			TAILQ_INSERT_TAIL(&aio_jobs, cb, list);
+ 			TAILQ_INSERT_TAIL(&ki->kaio_jobqueue, cb, plist);
+ 			wakecount++;
+ 			if (cb->jobstate != JOBST_JOBQGLOBAL)
+ 				panic("invalid queue value");
+ 		}
+ 	}
+ 
+ 	while (wakecount--) {
+ 		if ((aiop = TAILQ_FIRST(&aio_freeproc)) != 0) {
+ 			TAILQ_REMOVE(&aio_freeproc, aiop, list);
+ 			TAILQ_INSERT_TAIL(&aio_activeproc, aiop, list);
+ 			aiop->aioprocflags &= ~AIOP_FREE;
+ 			wakeup(aiop->aioproc);
+ 		}
+ 	}
+ }
+ 
  /*
   * Queue a new AIO request.  Choosing either the threaded or direct physio
   * VCHR technique is done in this code.
***************
*** 1151,1156 ****
--- 1237,1244 ----
  	struct filedesc *fdp;
  	struct file *fp;
  	unsigned int fd;
+ 	struct socket *so;
+ 	int s;
  
  	int error;
  	int opcode;
***************
*** 1269,1274 ****
--- 1357,1392 ----
  	aiocbe->lio = lj;
  	ki = p->p_aioinfo;
  
+ 	if (fp->f_type == DTYPE_SOCKET) {
+ 
+ 		/*
+ 		 * Alternate queueing for socket ops: We reach down into the descriptor
+ 		 * to get the socket data.  We then check to see if the socket is ready
+ 		 * to be read or written (based on the requested operation).
+ 		 *
+ 		 * If it is not ready for io, then queue the aiocbe on the socket,
+ 		 * and set the flags so we get a call when sbnotify() happens.
+ 		 */
+ 		so = (struct socket *)fp->f_data;
+ 		s = splnet();
+ 		if (((opcode == LIO_READ) && (!soreadable(so))) ||
+ 		    ((opcode == LIO_WRITE) && (!sowriteable(so)))) {
+ 			TAILQ_INSERT_TAIL(&so->so_aiojobq, aiocbe, list);
+ 			TAILQ_INSERT_TAIL(&ki->kaio_sockqueue, aiocbe, plist);
+ 			if (opcode == LIO_READ) {
+ 				so->so_rcv.sb_flags |= SB_AIO;
+ 			} else {
+ 				so->so_snd.sb_flags |= SB_AIO;
+ 			}
+ 			aiocbe->jobstate = JOBST_JOBQGLOBAL;	/* XXX */
+ 			ki->kaio_queue_count++;
+ 			num_queue_count++;
+ 			splx(s);
+ 			return 0;
+ 		}
+ 		splx(s);
+ 	}
+ 
  	if ((error = aio_qphysio(p, aiocbe)) == 0) {
  		return 0;
  	} else if (error > 0) {
***************
*** 1287,1294 ****
--- 1405,1414 ----
  	if (lj) {
  		lj->lioj_queue_count++;
  	}
+ 	s = splnet();
  	TAILQ_INSERT_TAIL(&ki->kaio_jobqueue, aiocbe, plist);
  	TAILQ_INSERT_TAIL(&aio_jobs, aiocbe, list);
+ 	splx(s);
  	aiocbe->jobstate = JOBST_JOBQGLOBAL;
  
  	num_queue_count++;
***************
*** 1303,1308 ****
--- 1423,1429 ----
  	 * correct thing to do.
  	 */
  retryproc:
+ 	s = splnet();
  	if ((aiop = TAILQ_FIRST(&aio_freeproc)) != NULL) {
  		TAILQ_REMOVE(&aio_freeproc, aiop, list);
  		TAILQ_INSERT_TAIL(&aio_activeproc, aiop, list);
***************
*** 1319,1324 ****
--- 1440,1446 ----
  		}
  		num_aio_resv_start--;
  	}
+ 	splx(s);
  	return error;
  }
***************
*** 1367,1377 ****
  	jobref = fuword(&ujob->_aiocb_private.kernelinfo);
  	if (jobref == -1 || jobref == 0)
  		return EINVAL;
! 
  	for (cb = TAILQ_FIRST(&ki->kaio_jobdone);
  	     cb;
  	     cb = TAILQ_NEXT(cb, plist)) {
  		if (((intptr_t) cb->uaiocb._aiocb_private.kernelinfo) == jobref) {
  			if (ujob == cb->uuaiocb) {
  				p->p_retval[0] = cb->uaiocb._aiocb_private.status;
  			} else {
--- 1489,1501 ----
  	jobref = fuword(&ujob->_aiocb_private.kernelinfo);
  	if (jobref == -1 || jobref == 0)
  		return EINVAL;
! 
! 	s = splnet();
  	for (cb = TAILQ_FIRST(&ki->kaio_jobdone);
  	     cb;
  	     cb = TAILQ_NEXT(cb, plist)) {
  		if (((intptr_t) cb->uaiocb._aiocb_private.kernelinfo) == jobref) {
+ 			splx(s);
  			if (ujob == cb->uuaiocb) {
  				p->p_retval[0] = cb->uaiocb._aiocb_private.status;
  			} else {
***************
*** 1388,1394 ****
  			return 0;
  		}
  	}
! 
  	s = splbio();
  	for (cb = TAILQ_FIRST(&ki->kaio_bufdone);
  	     cb;
--- 1512,1519 ----
  			return 0;
  		}
  	}
! 	splx(s);
! 
  	s = splbio();
  	for (cb = TAILQ_FIRST(&ki->kaio_bufdone);
  	     cb;
***************
*** 1466,1471 ****
--- 1591,1597 ----
  		ijoblist[njoblist] = fuword(&cbp->_aiocb_private.kernelinfo);
  		njoblist++;
  	}
+ 
  	if (njoblist == 0) {
  		zfree(aiol_zone, ijoblist);
  		zfree(aiol_zone, ujoblist);
***************
*** 1565,1579 ****
--- 1692,1710 ----
  		}
  	}
  
+ 	s = splnet();
+ 
  	for (cb = TAILQ_FIRST(&ki->kaio_jobqueue);
  	     cb;
  	     cb = TAILQ_NEXT(cb, plist)) {
  		if (((intptr_t) cb->uaiocb._aiocb_private.kernelinfo) == jobref) {
  			p->p_retval[0] = EINPROGRESS;
+ 			splx(s);
  			return 0;
  		}
  	}
+ 	splx(s);
  
  	s = splbio();
  	for (cb = TAILQ_FIRST(&ki->kaio_bufdone);
***************
*** 2009,2011 ****
--- 2140,2215 ----
  	}
  	splx(s);
  }
+ 
+ int
+ aio_waitcomplete(struct proc *p, struct aio_waitcomplete_args *uap)
+ {
+ 	struct timeval atv;
+ 	struct timespec ts;
+ 	struct aiocb **cbptr;
+ 	struct kaioinfo *ki;
+ 	struct aiocblist *cb = NULL;
+ 	int error, s, timo;
+ 
+ 	timo = 0;
+ 	if (uap->timeout) {
+ 		/*
+ 		 * Get timespec struct
+ 		 */
+ 		error = copyin((caddr_t) uap->timeout, (caddr_t) &ts, sizeof(ts));
+ 		if (error)
+ 			return error;
+ 
+ 		if ((ts.tv_nsec < 0) || (ts.tv_nsec >= 1000000000))
+ 			return (EINVAL);
+ 
+ 		TIMESPEC_TO_TIMEVAL(&atv, &ts);
+ 		if (itimerfix(&atv))
+ 			return (EINVAL);
+ 		timo = tvtohz(&atv);
+ 	}
+ 
+ 	ki = p->p_aioinfo;
+ 	if (ki == NULL)
+ 		return EAGAIN;
+ 
+ 	cbptr = uap->aiocbp;
+ 
+ 	while (1) {
+ 		if ((cb = TAILQ_FIRST(&ki->kaio_jobdone)) != 0) {
+ 			suword(uap->aiocbp, (int)cb->uuaiocb);
+ 			p->p_retval[0] = cb->uaiocb._aiocb_private.status;
+ 			if (cb->uaiocb.aio_lio_opcode == LIO_WRITE) {
+ 				curproc->p_stats->p_ru.ru_oublock += cb->outputcharge;
+ 				cb->outputcharge = 0;
+ 			} else if (cb->uaiocb.aio_lio_opcode == LIO_READ) {
+ 				curproc->p_stats->p_ru.ru_inblock += cb->inputcharge;
+ 				cb->inputcharge = 0;
+ 			}
+ 			aio_free_entry(cb);
+ 			return 0;
+ 		}
+ 
+ 		s = splbio();
+ 		if ((cb = TAILQ_FIRST(&ki->kaio_bufdone)) != 0) {
+ 			splx(s);
+ 			suword(uap->aiocbp, (int)cb->uuaiocb);
+ 			p->p_retval[0] = cb->uaiocb._aiocb_private.status;
+ 			aio_free_entry(cb);
+ 			return 0;
+ 		}
+ 		splx(s);
+ 
+ 		ki->kaio_flags |= KAIO_WAKEUP;
+ 		error = tsleep(p, PRIBIO | PCATCH, "aiowc", timo);
+ 
+ 		if (error < 0) {
+ 			return error;
+ 		} else if (error == EINTR) {
+ 			return EINTR;
+ 		} else if (error == EWOULDBLOCK) {
+ 			return EAGAIN;
+ 		}
+ 	}
+ }
>Release-Note:
>Audit-Trail:
>Unformatted: