The main purpose of this patch is to fix handling of unpause calls
from PM while another call is ongoing. The solution to this problem
sparked a full revision of the threading model, consisting of a large
number of related changes:
- all active worker threads are now always associated with a process,
and every process has at most one active thread working for it;
- the process lock is always held by a process's worker thread;
- a process can now have both normal work and postponed PM work
associated with it;
- timer expiry and non-postponed PM work are done from the main thread;
- filp garbage collection is done from a thread associated with VFS;
- reboot calls from PM are now done from a thread associated with PM;
- the DS events handler is protected from starting multiple threads;
- support for a system worker thread has been removed;
- the deadlock recovery thread has been replaced by a parameter to the
worker_start() function; the number of worker threads has
consequently been increased by one;
- saving and restoring of global but per-thread variables is now
centralized in worker_suspend() and worker_resume() (see the sketch
after this list); err_code is now saved and restored in all cases;
- the concept of jobs has been removed, and job_m_in now points to a
message stored in the worker thread structure instead;
- the PM lock has been removed;
- the separate exec lock has been replaced by a lock on the VM
process, which was already being locked for exec calls anyway;
- PM_UNPAUSE is now processed as a postponed PM request, from a thread
associated with the target process;
- the FP_DROP_WORK flag has been removed, since it was no more than an
optimization, applying only to processes operating on a pipe while
being killed;
- assignment to "fp" now takes place only when obtaining new work in
the main thread or a worker thread, when resuming execution of a
thread, and in the special case of exiting processes during reboot;
- there are no longer special cases where the yield() call is used to
force a thread to run.
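As an illustration of the centralized saving and restoring of per-thread
globals mentioned above, here is a minimal sketch; the struct layout, field
names and set of globals are assumptions for illustration, not the actual
VFS code.

/* Sketch only: struct layout and names are assumed for illustration. */
struct fproc;                     /* per-process table entry (opaque here) */
typedef struct { int m_type; } message;

/* "Global but per-thread" variables named in this commit message. */
static struct fproc *fp;          /* process the current thread works for */
static int err_code;              /* errno-style result of the current call */

struct worker_thread {
  struct fproc *w_fp;             /* process this worker is associated with */
  message w_m_in;                 /* request message; job_m_in would point here */
  int w_err_code;                 /* saved copy of err_code */
};

/* Save all per-thread globals in one place before the thread blocks. */
static void worker_suspend(struct worker_thread *wp)
{
  wp->w_fp = fp;
  wp->w_err_code = err_code;
}

/* Restore the globals for the worker that is about to continue. */
static void worker_resume(struct worker_thread *wp)
{
  fp = wp->w_fp;
  err_code = wp->w_err_code;
}
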
Change-Id: I7a97b9b95c2450454a9b5318dfa0e6150d4e6858
Implement getrusage.
The following fields of struct rusage are not yet supported and are always
set to zero at this time:
long ru_nswap; /* swaps */
long ru_inblock; /* block input operations */
long ru_oublock; /* block output operations */
long ru_msgsnd; /* messages sent */
long ru_msgrcv; /* messages received */
long ru_nvcsw; /* voluntary context switches */
long ru_nivcsw; /* involuntary context switches */
test75.c is the unit test for this new function
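For reference, a minimal standalone use of the new call through its standard
POSIX interface (this is not the actual test75.c):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
  struct rusage ru;

  if (getrusage(RUSAGE_SELF, &ru) != 0) {
    perror("getrusage");
    return 1;
  }

  /* ru_utime/ru_stime are filled in; the fields listed above read as 0. */
  printf("user time:   %ld.%06ld s\n",
      (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
  printf("system time: %ld.%06ld s\n",
      (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
  return 0;
}
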
Change-Id: I3f1eb69de1fce90d087d76773b09021fc6106539
There is a deadlock vulnerability when there are no worker threads
available and all of them are blocked on a worker thread that's waiting for
a reply from a driver or a reply from an FS that needs to make a back call.
In these cases the deadlock resolver thread should kick in, but it didn't
do so in all cases. Moreover, POSIX calls from File Servers were no longer
handled properly, which could also lead to deadlocks.
By decoupling synchronous drivers from VFS, we are a big step closer to
supporting driver crashes under all circumstances. That is, VFS can't
become stuck on IPC with a synchronous driver (e.g., INET) and can
recover from crashing block drivers during open/close/ioctl or during
communication with an FS.
In order to maintain serialized communication with a synchronous driver,
the communication is wrapped by a mutex on a per-driver basis (not per
major number, as there can be multiple majors with identical endpoints).
Majors that share a driver endpoint point to a single mutex object.
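A sketch of that per-driver locking scheme. VFS uses its own mthread
library internally; the POSIX mutex calls and all structure/field names
below are stand-ins for illustration only.

#include <pthread.h>
#include <stdlib.h>

typedef int endpoint_t;            /* driver endpoint (stand-in) */
#define NR_MAJORS 64               /* number of major devices (assumed) */

/* One entry per major device. Majors that share a driver endpoint must
 * end up pointing at the same lock object, so that all communication
 * with that driver stays serialized. */
struct dmap_entry {
  endpoint_t dm_endpt;             /* endpoint of the driver for this major */
  pthread_mutex_t *dm_lock;        /* serializes all IPC with that driver */
} dmap[NR_MAJORS];

/* Map 'major' to 'endpt', reusing the mutex of any other major that is
 * already served by the same driver endpoint. */
int map_driver_lock(int major, endpoint_t endpt)
{
  int i;

  dmap[major].dm_endpt = endpt;
  for (i = 0; i < NR_MAJORS; i++) {
    if (i != major && dmap[i].dm_endpt == endpt && dmap[i].dm_lock != NULL) {
      dmap[major].dm_lock = dmap[i].dm_lock;   /* share the existing lock */
      return 0;
    }
  }
  if ((dmap[major].dm_lock = malloc(sizeof(pthread_mutex_t))) == NULL)
    return -1;                                 /* out of memory */
  return pthread_mutex_init(dmap[major].dm_lock, NULL);
}
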
In order to support crashes of block drivers, the file reopen tactic
had to be changed: first reopen the files associated with the crashed
driver, then send the new driver endpoint to the FSes (this order is
sketched after the list below). This solves a deadlock between the FS
and the block driver:
- VFS would send REQ_NEW_DRIVER to an FS, but the FS only receives it
after retrying the current request to the newly started driver.
- The block driver would refuse the retried request until all files
had been reopened.
- VFS would reopen files only after getting a reply from the initial
REQ_NEW_DRIVER.
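A compact sketch of the revised ordering; the two helper functions are
hypothetical and merely stand for the reopen and REQ_NEW_DRIVER steps
described above.

typedef int endpoint_t;

/* Hypothetical helpers for the two recovery steps described above. */
void reopen_files_on_major(int major, endpoint_t new_endpt);
void announce_new_driver_to_fses(int major, endpoint_t new_endpt);

/* Called when a restarted block driver comes back with a new endpoint. */
void block_driver_restarted(int major, endpoint_t new_endpt)
{
  /* 1. First reopen the files that were open on the crashed driver, so
   *    that the driver will accept retried requests from the FSes.    */
  reopen_files_on_major(major, new_endpt);

  /* 2. Only then tell the file servers about the new endpoint; their
   *    retried requests will no longer be refused by the driver.      */
  announce_new_driver_to_fses(major, new_endpt);
}
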
When a character special driver crashes, all associated files have to
be marked invalid and closed (or reopened if flagged as such). However,
they can only be closed if a thread holds exclusive access to them. To
obtain exclusive access, the worker thread (which handles the new driver
endpoint event from DS) schedules a new job to garbage collect invalid
files. This way, we can signal the worker thread that was talking to the
crashed driver, which will then release its exclusive access to the file
associated with that driver, and thus prevent the garbage collecting
worker thread from deadlocking on that file.
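A sketch of that sequence as seen from the DS-event handler; all names,
sizes and helpers here are assumptions chosen for illustration.

typedef int endpoint_t;
#define NR_FILPS 512                 /* size of the filp table (assumed) */

struct filp {
  int filp_count;                    /* usage count; 0 means free */
  endpoint_t filp_driver;            /* driver this filp talks to */
  int filp_invalid;                  /* marked for garbage collection */
} filp_table[NR_FILPS];

/* Hypothetical helpers standing in for the steps described above. */
void signal_workers_blocked_on(endpoint_t driver);
void schedule_invalid_filp_gc(void);

/* Handler for the "character driver crashed" event. */
void chardriver_crashed(endpoint_t driver)
{
  struct filp *f;

  /* Mark the affected filps; closing (or reopening) happens later. */
  for (f = filp_table; f < &filp_table[NR_FILPS]; f++)
    if (f->filp_count > 0 && f->filp_driver == driver)
      f->filp_invalid = 1;

  /* Wake any worker still blocked on IPC with the dead driver; it will
   * release its exclusive access to the filp it holds when it unwinds. */
  signal_workers_blocked_on(driver);

  /* Closing needs exclusive access per filp, so it is done by a
   * separate garbage-collection job, not by this event handler.       */
  schedule_invalid_filp_gc();
}
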
Also, when a character special driver crashes, RS will unmap the driver
and remap it upon restart. During unmapping, associated files are marked
invalid instead of waiting for an endpoint up event from DS, as that
event might come later than new read/write/select requests and thus
cause confusion in the freshly started driver.
When locking a filp, the usage counters are no longer checked. The usage
counter can legally go down to zero during filp invalidation while there
are locks pending.
DS events are handled by a separate worker thread instead of the main
thread, as reopening files could lead to another crash and a stuck thread;
an additional worker thread is then necessary to unlock it.
Finally, with everything asynchronous, a race condition in do_select
surfaced. A select entry was only marked in use after successfully sending
the initial select requests to the drivers and having to wait. When
multiple select() calls were handled, there was a chance that these entries
would be overwritten. As a result, some select results were ignored (and
select() remained blocked instead of returning) or do_select tried to
access filps that were no longer present (because they had been thrown away
by a secondary select()). This bug manifested itself with sendrecs, but was
very hard to reproduce. However, it became awfully easy to trigger with
asynsends only.
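A sketch of the fix: reserve the select entry before any driver requests go
out, so a concurrent select() cannot grab or overwrite the slot. The
structure, constants and field names below are assumptions.

#include <errno.h>

#define MAX_SELECTS 25               /* number of select slots (assumed) */
#define SUSPEND (-998)               /* "caller blocks" marker (assumed) */

struct selectentry {
  int se_inuse;                      /* slot reserved for an ongoing select() */
  /* ... requesting process, filps, per-fd operations ... */
};

static struct selectentry se_table[MAX_SELECTS];

int do_select_sketch(void)
{
  struct selectentry *se = NULL;
  int i;

  for (i = 0; i < MAX_SELECTS; i++)
    if (!se_table[i].se_inuse) { se = &se_table[i]; break; }
  if (se == NULL) return ENOSPC;     /* no free slot */

  se->se_inuse = 1;                  /* reserve first: this fixes the race */

  /* ... send the initial select requests to the drivers; asynchronous
   * replies now always find a valid, reserved entry ... */

  return SUSPEND;                    /* caller is blocked until results arrive */
}
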
. generalize libexec slightly to get some more necessary information
from ELF files, e.g. the interpreter (see the sketch after this list)
. execute dynamically linked executables when exec()ed by VFS
. switch to the netbsd variant of elf32.h exclusively, which solves some
conflicting headers
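A minimal sketch of extracting the interpreter (PT_INTERP) from an ELF32
image using the standard ELF structures (Linux <elf.h> shown here); this is
not the actual libexec code and does only minimal validation.

#include <elf.h>
#include <string.h>

/* Copy the PT_INTERP string (e.g. the dynamic linker path) out of an
 * ELF32 image that has already been read into memory. Returns 0 on
 * success, -1 if there is no interpreter or the image looks bad.      */
int elf32_get_interp(const char *image, size_t len, char *buf, size_t bufsize)
{
  const Elf32_Ehdr *ehdr = (const Elf32_Ehdr *)image;
  const Elf32_Phdr *phdr;
  int i;

  if (len < sizeof(*ehdr) || memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0)
    return -1;

  phdr = (const Elf32_Phdr *)(image + ehdr->e_phoff);
  for (i = 0; i < ehdr->e_phnum; i++) {
    if (phdr[i].p_type == PT_INTERP && phdr[i].p_filesz <= bufsize) {
      memcpy(buf, image + phdr[i].p_offset, phdr[i].p_filesz);
      return 0;                      /* buf now holds the interpreter path */
    }
  }
  return -1;                         /* statically linked: no interpreter */
}
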
- Revise VFS-FS protocol and update VFS/MFS/ISOFS accordingly.
- Clean up MFS by removing old, dead code (backwards compatibility is broken by
the new VFS-FS protocol, anyway) and rewrite other parts. Also, make sure all
functions have proper banners and prototypes.
- VFS should always provide a (syntactically) valid path to the FS; no need for
the FS to do sanity checks when leaving/entering mount points.
- Fix several bugs in MFS:
- Several path lookup bugs in MFS.
- A link can be too big for the path buffer.
- A mountpoint can become inaccessible when the creation of a new inode
fails, because the inode already exists and is a mountpoint.
- Introduce support for supplemental groups.
- Add test 46 to test supplemental group functionality (and remove obsolete
supplemental group tests from test 2).
- Clean up VFS (not everything is done yet).
- ISOFS now opens the device read-only. This makes the -r flag in the mount
command unnecessary (but it will still be reported as mounted read-write).
- Introduce PipeFS. PipeFS is a new FS that handles all anonymous and
named pipes. However, named pipes still reside on the (M)FS, as they are part
of the file system on disk. To make this work VFS now has a concept of
'mapped' inodes, which causes read, write, truncate and stat requests to be
redirected to the mapped FS, and all other requests to the original FS.
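A sketch of the 'mapped inode' dispatch; the request codes and vnode fields
below are simplified stand-ins, not the actual VFS-FS protocol definitions.

typedef int endpoint_t;
#define NONE (-1)                    /* no mapped FS */

/* Simplified stand-ins for the VFS-FS request codes. */
enum { REQ_READ, REQ_WRITE, REQ_FTRUNC, REQ_STAT, REQ_CHMOD, REQ_LINK };

struct vnode {
  endpoint_t v_fs_e;                 /* FS that owns the on-disk inode */
  endpoint_t v_mapfs_e;              /* FS the inode is mapped to, or NONE */
};

static int is_data_request(int req)
{
  return req == REQ_READ || req == REQ_WRITE ||
         req == REQ_FTRUNC || req == REQ_STAT;
}

/* Pick the FS endpoint a request should go to: data operations on a
 * mapped inode (e.g. a named pipe) go to the mapped FS (PipeFS), all
 * other requests go to the FS the inode actually lives on.            */
endpoint_t pick_fs_endpoint(const struct vnode *vp, int req)
{
  if (vp->v_mapfs_e != NONE && is_data_request(req))
    return vp->v_mapfs_e;
  return vp->v_fs_e;
}
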
- all macros in consts.h that depend on NR_TASKS have been replaced by
FP_BLOCKED_ON_* values
- fp_suspended removed and replaced by fp_blocked_on. Testing whether a process
is suspended is equal to testing whether fp_blocked_on is FP_BLOCKED_ON_NONE or
not
- fp_task is valid only if fp_blocked_on == FP_BLOCKED_ON_OTHER
- no need for special values that do not collide with valid and special
endpoints, since they are not used as endpoints anymore
- suspend() now only takes FP_BLOCKED_ON_* values, not endpoints anymore
- suspend(task) replaced by wait_for(task), which sets fp_task so we remember
who we are waiting for; suspend() sets fp_blocked_on to FP_BLOCKED_ON_OTHER to
signal that we are waiting for some other process
- some functions should take endpoint_t instead of int, fixed
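A sketch of the new bookkeeping described in this list; the constant set is
abbreviated and the function signatures are illustrative only.

typedef int endpoint_t;

/* Abbreviated, illustrative set of blocking states. */
enum {
  FP_BLOCKED_ON_NONE,                /* not suspended */
  FP_BLOCKED_ON_PIPE,
  FP_BLOCKED_ON_SELECT,
  FP_BLOCKED_ON_OTHER                /* waiting on another process, see fp_task */
};

struct fproc {
  int fp_blocked_on;                 /* one of FP_BLOCKED_ON_* */
  endpoint_t fp_task;                /* valid only for FP_BLOCKED_ON_OTHER */
};

/* A process is suspended iff fp_blocked_on != FP_BLOCKED_ON_NONE. */
int fp_is_blocked(const struct fproc *rfp)
{
  return rfp->fp_blocked_on != FP_BLOCKED_ON_NONE;
}

/* Suspend on some other process/task and remember who we are waiting for. */
void wait_for(struct fproc *rfp, endpoint_t who)
{
  rfp->fp_task = who;
  rfp->fp_blocked_on = FP_BLOCKED_ON_OTHER;
}
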
if the process was REVIVING. (susp_count doesn't count those
processes.) this together with the dev_io SELECT suspend side effect
for asynch. character devices solves the hanging pipe bug, or
at least vastly improves it.
added sanity checks, turned off by default.
made the {NOT_,}{SUSPENDING,REVIVING} constants weirder to
help sanity checking.