minix/servers/vfs/fproc.h
Thomas Veerman 992799b91f VFS: make all IPC asynchronous
By decoupling synchronous drivers from VFS, we are a big step closer to
supporting driver crashes under all circumstances. That is, VFS can't
become stuck on IPC with a synchronous driver (e.g., INET) and can
recover from crashing block drivers during open/close/ioctl or during
communication with an FS.

In order to maintain serialized communication with a synchronous driver,
the communication is wrapped in a mutex on a per-driver basis (not per
major number, as there can be multiple majors with identical endpoints).
Majors that share a driver endpoint point to a single mutex object.
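
A minimal sketch of this scheme, using POSIX mutexes instead of the
mthread primitives VFS actually uses; the names (struct driver_lock,
get_driver_lock, talk_to_driver) and the table size are made up:

    #include <pthread.h>

    struct driver_lock {
        int endpt;                  /* driver endpoint */
        pthread_mutex_t mutex;      /* serializes IPC with this driver */
    };

    #define NR_LOCKS 16
    static struct driver_lock locks[NR_LOCKS];
    static int nr_locks = 0;

    /* Majors that share a driver endpoint get the same mutex object.
     * For brevity there is no overflow check on nr_locks here.
     */
    static pthread_mutex_t *get_driver_lock(int endpt)
    {
        int i;

        for (i = 0; i < nr_locks; i++)
            if (locks[i].endpt == endpt)
                return &locks[i].mutex;

        locks[nr_locks].endpt = endpt;
        pthread_mutex_init(&locks[nr_locks].mutex, NULL);
        return &locks[nr_locks++].mutex;
    }

    /* All communication with one driver funnels through its mutex. */
    static void talk_to_driver(int endpt)
    {
        pthread_mutex_t *lock = get_driver_lock(endpt);

        pthread_mutex_lock(lock);
        /* ... the actual sendrec() to the driver would go here ... */
        pthread_mutex_unlock(lock);
    }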

In order to support crashes from block drivers, the file reopen tactic
had to be changed: first reopen the files associated with the crashed
driver, then send the new driver endpoint to FSes (see the sketch after
the list below). This solves a deadlock between the FS and the block
driver:
  - VFS would send REQ_NEW_DRIVER to an FS, but the FS only receives it
    after retrying the current request to the newly started driver.
  - The block driver would refuse the retried request until all files
    had been reopened.
  - VFS would reopen files only after getting a reply from the initial
    REQ_NEW_DRIVER.
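
A sketch of the new ordering; the helper names below (reopen_files_on,
send_new_driver_to_fses, block_driver_up) are hypothetical stand-ins for
the real VFS machinery, and only the ordering matters:

    #include <stdio.h>

    /* Stubs standing in for the real VFS steps. */
    static void reopen_files_on(int endpt)
    {
        printf("reopening filps on new endpoint %d\n", endpt);
    }

    static void send_new_driver_to_fses(int endpt)
    {
        printf("sending REQ_NEW_DRIVER(%d) to FSes\n", endpt);
    }

    /* Called when a restarted block driver comes back up. */
    static void block_driver_up(int new_endpt)
    {
        /* 1. Reopen files first, so the restarted driver will accept
         *    requests retried by an FS.
         */
        reopen_files_on(new_endpt);

        /* 2. Only now hand the new endpoint to FSes; an FS retrying
         *    its current request is no longer refused by the driver.
         */
        send_new_driver_to_fses(new_endpt);
    }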

When a character special driver crashes, all associated files have to
be marked invalid and closed (or reopened if flagged as such). However,
a file can only be closed by a thread that holds exclusive access to it.
To obtain exclusive access, the worker thread (which handles the new
driver endpoint event from DS) schedules a new job to garbage collect the
invalid files. This way, we can first signal the worker thread that was
talking to the crashed driver, so that it releases its exclusive access
to the file involved, and thereby prevent the garbage-collecting worker
thread from deadlocking on that file.
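
A sketch of this two-step scheme with POSIX threads; all names below are
made up, and the stubs merely stand in for the real VFS steps:

    #include <pthread.h>
    #include <stdio.h>

    /* Stubs for the real VFS machinery. */
    static void mark_filps_invalid(void)     { printf("marking filps invalid\n"); }
    static void signal_blocked_workers(void) { printf("waking stuck workers\n"); }

    /* The scheduled job: by the time it runs, workers that were talking
     * to the dead driver have released their exclusive filp access, so
     * taking each filp lock here cannot deadlock.
     */
    static void *gc_invalid_filps(void *arg)
    {
        (void)arg;
        printf("closing/reopening invalid filps\n");
        return NULL;
    }

    /* Worker handling the new-driver-endpoint event from DS. */
    static void driver_up_event(void)
    {
        pthread_t gc;

        mark_filps_invalid();
        signal_blocked_workers();

        /* Don't close the filps here: schedule a separate job instead. */
        pthread_create(&gc, NULL, gc_invalid_filps, NULL);
        pthread_detach(gc);
    }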

Also, when a character special driver crashes, RS will unmap the driver
and remap it upon restart. During unmapping, the associated files are
marked invalid right away instead of waiting for an endpoint-up event
from DS, as that event might arrive later than new read/write/select
requests and thus cause confusion in the freshly started driver.

When locking a filp, the usage counter is no longer checked: the counter
can legally drop to zero during filp invalidation while lock requests are
still pending.
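
A hypothetical miniature of the relaxed invariant (struct filp_mini is
made up; VFS's real filp and lock types differ):

    #include <pthread.h>

    struct filp_mini {
        int filp_count;             /* number of users of this filp */
        pthread_mutex_t filp_lock;  /* exclusive access */
    };

    /* Take the filp lock without insisting that filp_count > 0:
     * invalidation may legally zero the count while this caller is
     * still queued, so a check like assert(f->filp_count > 0) would
     * now fire spuriously.
     */
    static void lock_filp_mini(struct filp_mini *f)
    {
        pthread_mutex_lock(&f->filp_lock);
    }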

DS events are handled by a separate worker thread instead of the main
thread as reopening files could lead to another crash and a stuck thread.
An additional worker thread is then necessary to unlock it.

Finally, with everything asynchronous, a race condition in do_select
surfaced. A select entry was only marked in use after successfully
sending the initial select requests to drivers and having to wait. When
multiple select() calls were handled, there was a window in which these
entries could be overwritten. The effect was that some select results
were ignored (and select() remained blocking instead of returning) or
that do_select tried to access filps that were no longer present (because
they had been thrown away by a second select()). This bug manifested
itself with sendrecs, but was very hard to reproduce. However, it became
awfully easy to trigger with asynsends only.
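
A sketch of the fix; the miniature select table below is hypothetical,
but shows the point of marking the entry in use before any request goes
out:

    #include <stddef.h>

    #define MAXSELECTS 8

    /* Hypothetical miniature of a select table entry. */
    static struct select_entry {
        int in_use;                 /* claimed before any request is sent */
        int requestor;              /* endpoint of the select() caller */
    } selecttab[MAXSELECTS];

    /* The fix: claim the slot *before* sending asynchronous select
     * requests to drivers, so a concurrent select() cannot grab and
     * overwrite the same entry while replies are still pending.
     */
    static struct select_entry *claim_select_slot(int who)
    {
        int i;

        for (i = 0; i < MAXSELECTS; i++) {
            if (!selecttab[i].in_use) {
                selecttab[i].in_use = 1;    /* mark in use immediately */
                selecttab[i].requestor = who;
                return &selecttab[i];
            }
        }
        return NULL;                /* table full */
    }
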
2012-09-17 11:01:45 +00:00


#ifndef __VFS_FPROC_H__
#define __VFS_FPROC_H__

#include "threads.h"

#include <sys/select.h>
#include <minix/safecopies.h>

/* This is the per-process information. A slot is reserved for each potential
 * process. Thus NR_PROCS must be the same as in the kernel. It is not
 * possible or even necessary to tell when a slot is free here.
 */
#define LOCK_DEBUG 0

EXTERN struct fproc {
  unsigned fp_flags;

  pid_t fp_pid;                 /* process id */
  endpoint_t fp_endpoint;       /* kernel endpoint number of this process */

  struct vnode *fp_wd;          /* working directory; NULL during reboot */
  struct vnode *fp_rd;          /* root directory; NULL during reboot */

  struct filp *fp_filp[OPEN_MAX]; /* the file descriptor table */
  fd_set fp_filp_inuse;         /* which fd's are in use? */
  fd_set fp_cloexec_set;        /* bit map for POSIX Table 6-2 FD_CLOEXEC */

  dev_t fp_tty;                 /* major/minor of controlling tty */

  int fp_blocked_on;            /* what is it blocked on */
  int fp_block_callnr;          /* blocked call if rd/wr can't finish */
  int fp_cum_io_partial;        /* partial byte count if rd/wr can't finish */
  endpoint_t fp_task;           /* which task is proc suspended on */
  endpoint_t fp_ioproc;         /* proc no. in suspended-on i/o message */
  cp_grant_id_t fp_grant;       /* revoke this grant on unsuspend if > -1 */

  uid_t fp_realuid;             /* real user id */
  uid_t fp_effuid;              /* effective user id */
  gid_t fp_realgid;             /* real group id */
  gid_t fp_effgid;              /* effective group id */
  int fp_ngroups;               /* number of supplemental groups */
  gid_t fp_sgroups[NGROUPS_MAX]; /* supplemental groups */
  mode_t fp_umask;              /* mask set by umask system call */

  mutex_t fp_lock;              /* mutex to lock fproc object */
  struct job fp_job;            /* pending job */
  thread_t fp_wtid;             /* thread ID of worker */

  char fp_name[PROC_NAME_LEN];  /* last exec() */

#if LOCK_DEBUG
  int fp_vp_rdlocks;            /* number of read-only locks on vnodes */
  int fp_vmnt_rdlocks;          /* number of read-only locks on vmnts */
#endif
} fproc[NR_PROCS];

/* fp_flags */
#define FP_NOFLAGS      0000
#define FP_SUSP_REOPEN  0001    /* Process is suspended until the reopens are
                                 * completed (after the restart of a driver).
                                 */
#define FP_REVIVED      0002    /* Indicates process is being revived */
#define FP_SESLDR       0004    /* Set if process is session leader */
#define FP_PENDING      0010    /* Set if process has pending work */
#define FP_EXITING      0020    /* Set if process is exiting */
#define FP_PM_PENDING   0040    /* Set if process has pending PM request */
#define FP_SYS_PROC     0100    /* Set if process is a driver or FS */
#define FP_DROP_WORK    0200    /* Set if process won't accept new work */

/* Field values. */
#define NOT_REVIVING    0xC0FFEEE       /* process is not being revived */
#define REVIVING        0xDEEAD         /* process is being revived from suspension */
#define PID_FREE        0               /* process slot free */

#endif /* __VFS_FPROC_H__ */