When a string with junk after the terminating nul was passed to bind(2) on a
UNIX domain socket, the canonical path function did not properly terminate
the new string. This caused VFS to return ENAMETOOLONG on an otherwise valid
path name.
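A minimal sketch of the kind of fix involved (hypothetical helper name, not
the actual VFS code): only the bytes up to the first nul count towards the
length, and the copy is always terminated.

    #include <errno.h>
    #include <string.h>

    /* Copy a user-supplied path of 'user_len' bytes into 'canon'.  Only the
     * bytes up to the first nul are meaningful; junk after the nul must not
     * count against the length, and the result is always nul-terminated. */
    static int canonical_copy(char *canon, size_t size,
            const char *user, size_t user_len)
    {
            const char *nul = memchr(user, '\0', user_len);
            size_t len = (nul != NULL) ? (size_t)(nul - user) : user_len;

            if (len >= size) return ENAMETOOLONG;   /* genuinely too long */
            memcpy(canon, user, len);
            canon[len] = '\0';                      /* always terminate */
            return 0;
    }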
A test case has been added to test56.
Change-Id: I883b6be23d9e4ea13c3cee28cbb3726343df037f
REQ_PEEK behaves just like REQ_READ, except that it does not copy data
anywhere; it merely obtains the blocks from the FS into the cache. It is to
be used by the future mmap implementation.
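A toy sketch of the idea (made-up names, not the actual file server code): a
peek walks the same blocks as a read, but skips the copy to the caller,
leaving the blocks resident in the cache for later use.

    #include <stdint.h>
    #include <string.h>

    #define NR_BLOCKS   8
    #define BLOCK_SIZE  512

    /* Toy block cache: cached[b] != 0 means block b is resident. */
    static uint8_t cache[NR_BLOCKS][BLOCK_SIZE];
    static int cached[NR_BLOCKS];

    static void get_block(int b)        /* make sure block b is in the cache */
    {
            if (!cached[b]) {
                    /* ... read block b from the device into cache[b] ... */
                    cached[b] = 1;
            }
    }

    /* A peek visits the same blocks as a read; only a read copies data out. */
    static void do_blocks(int first, int count, uint8_t *dst /* NULL = peek */)
    {
            for (int b = first; b < first + count; b++) {
                    get_block(b);           /* block is now in the cache */
                    if (dst != NULL) {      /* "read": copy to the caller */
                            memcpy(dst, cache[b], BLOCK_SIZE);
                            dst += BLOCK_SIZE;
                    }
                    /* "peek": no copy; the cache is simply warmed up */
            }
    }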
Change-Id: I1b56de304f0a7152b69a72c8962d04258adb44f9
The build system distinction between "bootprog" and "service" is
meaningless as boot programs are standard services.
As minix.service.mk simply imports minix.bootprog.mk, reduce confusion
by removing minix.bootprog.mk and placing the rules in minix.service.mk.
Change-Id: I4056b1e574bed59a8c890239b41b1a7c7cad63e8
Remove old versions of system calls, as well as system calls that no longer
have a libc API interface (dup, dup2, creat).
VFS still supports the old system call numbers for the new stat system calls
(i.e., 65, 66, 67) to keep supporting old binaries built for MINIX 3.2.1
(prior to the release).
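For illustration, these calls can be provided purely as libc wrappers over
open(2) and fcntl(2), which is why VFS no longer needs dedicated call numbers
for them (this is a sketch, not necessarily the exact MINIX libc code):

    #include <fcntl.h>
    #include <unistd.h>

    /* creat(2) is by definition equivalent to this open(2) call. */
    int my_creat(const char *path, mode_t mode)
    {
            return open(path, O_WRONLY | O_CREAT | O_TRUNC, mode);
    }

    /* dup(2) can be expressed with fcntl(2)'s F_DUPFD ... */
    int my_dup(int fd)
    {
            return fcntl(fd, F_DUPFD, 0);
    }

    /* ... and dup2(2) as well, except that close+dup is not atomic here. */
    int my_dup2(int fd, int fd2)
    {
            if (fd == fd2)
                    return fcntl(fd, F_GETFL) < 0 ? -1 : fd2;  /* validate fd */
            close(fd2);             /* ignore errors, as dup2() does */
            return fcntl(fd, F_DUPFD, fd2);
    }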
Change-Id: I721779b58a50c7eeae20669de24658d55d69b25b
libchardriver does not support DEV_REOPEN and returns ERESTART when it is
attempted anyway. This made VFS unhappy; it erroneously concluded that the
driver was dead (EDEADEPT).
If an exec() fails partway through reading in the sections, the target
process is already gone and a defunct process remains. Sanity checking
the binary beforehand helps to avoid that.
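A minimal sketch of such an up-front check (hypothetical helper, not the
actual libexec code), assuming a 32-bit ELF binary:

    #include <elf.h>
    #include <string.h>

    /* Return 0 if 'image' (of 'len' bytes) starts with a plausible 32-bit
     * ELF header.  Run before the old process image is torn down, so a
     * mutilated binary can be rejected cleanly. */
    static int elf_sane(const void *image, size_t len)
    {
            const Elf32_Ehdr *hdr = image;

            if (len < sizeof(*hdr)) return -1;
            if (memcmp(hdr->e_ident, ELFMAG, SELFMAG) != 0) return -1;
            if (hdr->e_ident[EI_CLASS] != ELFCLASS32) return -1;
            if (hdr->e_type != ET_EXEC && hdr->e_type != ET_DYN) return -1;
            if (hdr->e_phentsize != sizeof(Elf32_Phdr)) return -1;
            if (hdr->e_phnum == 0) return -1;
            /* the program headers must lie within the bytes read so far */
            if (hdr->e_phoff > len ||
                (size_t)hdr->e_phnum * hdr->e_phentsize > len - hdr->e_phoff)
                    return -1;
            return 0;
    }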
test10 mutilates binaries and exec()s them on purpose; making an exec()
fail cleanly in such cases seems like acceptable behaviour.
This fixes test10 on ARM.
Change-Id: I1ed9bb200ce469d4d349073cadccad5503b2fcb0
* Updating common/lib
* Updating lib/csu
* Updating lib/libc
* Updating libexec/ld.elf_so
* Corrected test on __minix in featuretest to actually follow the
meaning of the comment.
* Cleaned up _REENTRANT-related definitions.
* Disabled -D_REENTRANT for libfetch
* Removing some unneeded __NBSD_LIBC defines and tests
Change-Id: Ic1394baef74d11b9f86b312f5ff4bbc3cbf72ce2
This patch uses stricter locking for REQ_LINK, REQ_MKDIR, REQ_MKNOD,
REQ_RENAME, REQ_RMDIR, REQ_SLINK, and REQ_UNLINK. For all of these requests,
VFS locks the directory in which it adds or removes an inode with
VNODE_WRITE, i.e., the operations have exclusive access to that directory.
Furthermore, REQ_CHOWN, REQ_CHMOD, and REQ_FTRUNC now lock the vmnt with
VMNT_READ; VMNT_WRITE was unnecessary.
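As a generic illustration of the discipline (plain pthreads, not the VFS
three-level lock): operations that change a directory's contents take that
directory's lock exclusively, while lookups only share it.

    #include <pthread.h>

    struct dir_node {
            pthread_rwlock_t lock;          /* stands in for the vnode lock */
            /* ... directory contents ... */
    };

    /* Adding or removing an entry: exclusive access (cf. VNODE_WRITE). */
    static void dir_modify(struct dir_node *dp)
    {
            pthread_rwlock_wrlock(&dp->lock);
            /* ... link/unlink/mkdir/rmdir the entry here ... */
            pthread_rwlock_unlock(&dp->lock);
    }

    /* Looking up an entry: shared access suffices (cf. VNODE_READ). */
    static void dir_lookup(struct dir_node *dp)
    {
            pthread_rwlock_rdlock(&dp->lock);
            /* ... search the directory ... */
            pthread_rwlock_unlock(&dp->lock);
    }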
Pipes have no file position, so VFS maintained (file) offsets into a buffer
internal to PFS and, for simplicity, stored them in vnodes, mixing the
responsibilities of filp and vnode objects.
With this patch, PFS ignores the position field in REQ_READ and REQ_WRITE
requests, making VFS's job a lot simpler.
. sync and fsync used an unnecessarily restrictive locking type
. fsync violated the locking order by obtaining a vmnt lock after a filp lock
. fsync contained a TOCTOU bug
. new_node violated the locking rules (it didn't upgrade the lock upon file
  creation)
. do_pipe used an unnecessarily restrictive locking type
. always lock pipes exclusively; even a read operation might need to write
  to a vnode object (to update the pipe size)
. when opening a file with O_TRUNC, upgrade the vnode lock when truncating
. utime used an unnecessarily restrictive locking type
. path parsing:
  . always acquire VMNT_WRITE or VMNT_EXCL on the vmnt and downgrade to
    VMNT_READ if that was what was actually requested. This prevents the
    following deadlock scenario:
    thread A:
        lock_vmnt(vmp, TLL_READSER);
        lock_vnode(vp, TLL_READSER);
        upgrade_vmnt_lock(vmp, TLL_WRITE);
    thread B:
        lock_vmnt(vmp, TLL_READ);
        lock_vnode(vp, TLL_READSER);
Thread A will be stuck in upgrade_vmnt_lock and thread B will be stuck in
lock_vnode. This happens when, for example, thread A tries to create a
new node (open.c:new_node) and thread B does eat_path to change
directory (stadir.c:do_chdir). When the path is being resolved, a
vnode is always locked with VNODE_OPCL (TLL_READSER) and then
downgraded to VNODE_READ if read-only is actually requested. Thread
A locks the vmnt with VMNT_WRITE (TLL_READSER) which still allows
VMNT_READ locks. Thread B can't acquire a lock on the vnode because
thread A has it; Thread A can't upgrade its vmnt lock to VMNT_WRITE
(TLL_WRITE) because thread B has a VMNT_READ lock on it.
By serializing vmnt locks during path parsing, thread B can only
acquire a lock on vmp when thread A has completely finished its
operation.
mount.c: In function 'mount_pfs':
mount.c:395:17: error: variable 'rfp' set but not used [-Werror=unused-but-set-variable]
Change-Id: I2f22590ab4e3a4a1678e9096626ebca53d2660e6
new_node makes the assumption that when it does last_dir on a path, a
subsequent advance will not yield a lock on a vmnt, because last_dir
already locked that vmnt. This is true except when last_dir resolves
to a directory on the parent vmnt of the file that is the result of
advance. For example,
# cd /
# echo foo > home
where home is on a different (sub)partition than / (as in a default
install): last_dir would resolve to / and advance would resolve to
/home.
With this change, last_dir resolves to the root node on the /home
partition, making the assumption valid again.
The VFS/FS protocol does not require the file server to supply a
special device node number in response to a REQ_CREATE request, as
this call creates only regular files. Therefore, VFS should not
erroneously save this piece of information from the REQ_CREATE reply
either.
Upon reboot, VFS semi-exits all processes and unmounts the file systems.
However, upon unmount, exiting FUSE file systems might still need service
from the file system (due to libc). As the FUSE process is halfway through
the exit procedure, it no longer has a valid root directory and working
directory, and trying to do system calls then triggers a sanity check in VFS.
This fix first exits normal processes, which should then allow the FUSE
file systems to be unmounted. Then VFS exits all remaining processes,
including File Servers, and unmounts the rest of the file systems.
There is a deadlock vulnerability when no worker threads are available and
all of them are blocked on a worker thread that is waiting for a reply from a
driver, or for a reply from an FS that needs to make a back call. In these
cases the deadlock resolver thread should kick in, but it didn't in all
cases. Moreover, POSIX calls from File Servers were no longer handled
properly, which could also lead to deadlocks.
The check_bsf() macro uses assert(mutex_trylock(&bsf_lock)) and
assumes bsf_lock is locked afterwards. This breaks when compiling
with NOASSERTS="yes", because the assert() is then compiled away and the
lock is never taken. Also: the macro has been turned into a function.
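A generic illustration of the pitfall and the fix, using plain pthreads
rather than the VFS locking primitives:

    #include <assert.h>
    #include <pthread.h>

    static pthread_mutex_t bsf_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Broken pattern: with assertions disabled (cf. NOASSERTS="yes"),
     * assert() expands to nothing, so the trylock is never executed and
     * the lock is never taken. */
    #define check_bsf_broken()  assert(pthread_mutex_trylock(&bsf_lock) == 0)

    /* Fixed pattern: acquire the lock unconditionally; only the check
     * itself may be compiled away. */
    static void check_bsf(void)
    {
            int r = pthread_mutex_trylock(&bsf_lock);

            assert(r == 0);
            (void)r;                /* keep -Wunused quiet without asserts */
            /* ... consistency checks on block special files go here ... */
            pthread_mutex_unlock(&bsf_lock);
    }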
. whenever this function is called, PM will expect
  the process to be cleaned up
. so don't abort the process entirely on error
. fixes a later 'forking on top of in-use child' VFS panic
By decoupling synchronous drivers from VFS, we are a big step closer to
supporting driver crashes under all circumstances. That is, VFS can't
become stuck on IPC with a synchronous driver (e.g., INET) and can
recover from crashing block drivers during open/close/ioctl or during
communication with an FS.
In order to maintain serialized communication with a synchronous driver,
the communication is wrapped by a mutex on a per-driver basis (not per major
number, as there can be multiple majors with identical endpoints). Majors
that share a driver endpoint point to a single mutex object.
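A minimal sketch of that sharing (hypothetical names, plain pthreads rather
than the VFS threading primitives): every major mapped to the same driver
endpoint gets a pointer to one mutex, so requests to that driver are
serialized no matter which major they came in on.

    #include <pthread.h>
    #include <stdlib.h>

    #define NR_DEVICES  32

    struct dmap_entry {
            int endpoint;                   /* driver endpoint, or -1 */
            pthread_mutex_t *drv_lock;      /* shared by majors of one driver */
    };

    static struct dmap_entry dmap[NR_DEVICES];

    /* Give 'major' the mutex already used for 'endpoint', or a fresh one.
     * (Error handling omitted; entries start out with endpoint == -1.) */
    static void set_driver_lock(int major, int endpoint)
    {
            dmap[major].endpoint = endpoint;
            for (int i = 0; i < NR_DEVICES; i++) {
                    if (i != major && dmap[i].endpoint == endpoint &&
                        dmap[i].drv_lock != NULL) {
                            dmap[major].drv_lock = dmap[i].drv_lock; /* share */
                            return;
                    }
            }
            dmap[major].drv_lock = malloc(sizeof(*dmap[major].drv_lock));
            pthread_mutex_init(dmap[major].drv_lock, NULL);
    }

Sending a request to the driver behind major m then amounts to locking
*dmap[m].drv_lock, doing the IPC, and unlocking, so two majors backed by the
same endpoint automatically serialize on the same mutex.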
In order to support crashes of block drivers, the file reopen tactic
had to be changed: first reopen the files associated with the crashed
driver, then send the new driver endpoint to the FSes. This solves a
deadlock between the FS and the block driver:
- VFS would send REQ_NEW_DRIVER to an FS, but the FS would only receive it
  after retrying the current request to the newly started driver.
- The block driver would refuse the retried request until all files
had been reopened.
- VFS would reopen files only after getting a reply from the initial
REQ_NEW_DRIVER.
When a character special driver crashes, all associated files have to
be marked invalid and closed (or reopened, if flagged as such). However,
a file can only be closed if a thread holds exclusive access to it. To
obtain exclusive access, the worker thread (which handles the new driver
endpoint event from DS) schedules a new job to garbage collect invalid
files. This way, we can signal any worker thread that was talking to the
crashed driver, so that it releases its exclusive access to a file
associated with that driver, and prevent the garbage collecting worker
thread from deadlocking on that file.
Also, when a character special driver crashes, RS will unmap the driver
and remap it upon restart. During unmapping, associated files are marked
invalid immediately, instead of waiting for an endpoint-up event from DS,
as that event might arrive later than new read/write/select requests and
thus cause confusion in the freshly started driver.
When locking a filp, the usage counters are no longer checked. The usage
counter can legally go down to zero during filp invalidation while there
are locks pending.
DS events are handled by a separate worker thread instead of the main
thread, as reopening files could lead to another crash and a stuck thread;
an additional worker thread is then necessary to unlock it.
Finally, with everything asynchronous, a race condition in do_select
surfaced. A select entry was only marked in use after successfully sending
the initial select requests to the drivers and having to wait. When multiple
select() calls were handled, these entries could be overwritten. As a
result, some select results were ignored (and select() remained blocked
instead of returning), or do_select tried to access filps that were no
longer present (because they had been thrown away by a second select()).
This bug manifested itself with sendrecs, but was very hard to reproduce.
However, it became awfully easy to trigger with asynsends only.
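A generic sketch of the fix (hypothetical names, not the actual do_select
code): claim the table entry before doing anything that can block, so a
concurrent select() cannot pick the same slot.

    #define NR_SELECT_SLOTS 8

    struct select_entry {
            int in_use;             /* claimed as soon as it is handed out */
            /* ... requester, fd sets, per-driver state ... */
    };

    static struct select_entry se_table[NR_SELECT_SLOTS];

    /* Reserve a free entry and mark it in use *before* any select requests
     * are sent to drivers.  The old code marked it only afterwards, so a
     * second select() could grab and overwrite the same slot while the
     * first was still waiting for driver replies. */
    static struct select_entry *select_claim(void)
    {
            for (int i = 0; i < NR_SELECT_SLOTS; i++) {
                    if (!se_table[i].in_use) {
                            se_table[i].in_use = 1;
                            return &se_table[i];
                    }
            }
            return NULL;            /* table full; caller reports an error */
    }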
. ld.so is linked at 0, but it can relocate itself; we wish to load ld.so
  higher, though, to trap NULL dereferences. If we know we have to execute
  ld.so, VFS tells libexec to put it higher.
When VFS runs out of vnodes after closing a vnode in opcl, common_open
will try to unlock, through unlock_filp, a vnode that has already been
unlocked in clone_opcl. By first obtaining and locking a new vnode, this
situation is prevented; if there are no free vnodes, common_open will
unlock a vnode that is still locked.
. some strncpy/strcpy to strlcpy conversions (see the sketch below)
. new <minix/param.h> to avoid including other MINIX headers that have
  definitions colliding with library and commands code, causing parse
  warnings
. removed some dead code / assignments
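A small illustration of why the strlcpy conversions matter (strlcpy is
available in the BSD-derived libc; elsewhere it may have to be provided):
strncpy does not guarantee nul termination, strlcpy does, and its return
value makes truncation easy to detect.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char a[8], b[8];
            const char *src = "a string longer than eight bytes";

            strncpy(a, src, sizeof(a));     /* a is NOT nul-terminated */

            if (strlcpy(b, src, sizeof(b)) >= sizeof(b))
                    printf("truncated, but b is a valid string: %s\n", b);
            return 0;
    }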