* Removed startup code patches in lib/csu regarding the kernel-to-userland ABI.
* Aligned the stack layout with the NetBSD stack layout.
* Generate valid stack pointers instead of offsets by taking into account
_minix_kerninfo->kinfo->user_sp.
* Refactored stack generation by moving part of execve() into two
functions, minix_stack_params() and minix_stack_fill(), and using them
in execve(), RS and VM.
* Changed load offset of rtld (ld.so) to:
execi.args.stack_high - execi.args.stack_size - 0xa00000
which places it 10MB below the main executable's stack.
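A minimal sketch of the address computations involved, assuming the caller
obtains the stack top from _minix_kerninfo->kinfo->user_sp; the helper names
and includes are illustrative, not the actual libc/libexec code:

    #include <stddef.h>
    #include <minix/type.h>

    /* the kernel-provided stack top turns frame offsets into real pointers */
    static vir_bytes
    stack_frame_addr(vir_bytes user_sp, size_t frame_size)
    {
            return user_sp - frame_size;
    }

    /* ld.so is loaded 10MB (0xa00000) below the main executable's stack */
    static vir_bytes
    rtld_load_addr(vir_bytes stack_high, size_t stack_size)
    {
            return stack_high - stack_size - 0xa00000;
    }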
Change-Id: I839daf3de43321cded44105634102d419cb36cec
. turns on mmap() functionality for files by default
. also causes exec() to use it to map in executables
  without copying, sharing those pages with the
  disk cache and other instances of the executable
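A rough sketch of the idea, assuming a plain POSIX mmap() of the executable's
file descriptor; the function name, offsets and addresses are illustrative:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <stddef.h>

    /* map an executable's text from its FD instead of copying it, so the
     * pages can be shared with the disk cache and other instances */
    static void *
    map_text(int exec_fd, off_t file_off, size_t len, void *vaddr)
    {
            void *p = mmap(vaddr, len, PROT_READ | PROT_EXEC,
                MAP_PRIVATE | MAP_FIXED, exec_fd, file_off);

            return p == MAP_FAILED ? NULL : p;
    }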
Change-Id: Idb94dfe110eed916cf83b12c45e1a77241a2cee5
The VM server now manages its call masks such that all user processes
share the same call mask. As a result, an update for the call mask of
any user process will apply to all user processes. This is similar to
the privilege infrastructure employed by the kernel, and may serve as
a template for similar fine-grained restrictions in other servers.
Concretely, this patch fixes the problem of "service edit init" not
applying the given VM call mask to user processes started from RC
scripts during system startup.
In addition, this patch makes RS set a proper VM call mask for each
recovery script it spawns.
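A sketch of the shared-mask idea; the names and sizes here are hypothetical,
not the actual VM code:

    #include <string.h>

    #define NR_VM_CALLS 64

    /* one bitmap of permitted VM calls, shared by all user processes */
    static unsigned char user_call_mask[NR_VM_CALLS / 8];

    static int
    vm_call_allowed(int callnr)
    {
            return user_call_mask[callnr / 8] & (1 << (callnr % 8));
    }

    static void
    vm_set_user_call_mask(const unsigned char *mask)
    {
            /* a single update applies to every user process at once */
            memcpy(user_call_mask, mask, sizeof(user_call_mask));
    }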
Change-Id: I520a30d85a0d3f3502d2b158293a2258825358cf
Implement getrusage.
The following fields of struct rusage are not supported and are always set to zero at this time:
long ru_nswap; /* swaps */
long ru_inblock; /* block input operations */
long ru_oublock; /* block output operations */
long ru_msgsnd; /* messages sent */
long ru_msgrcv; /* messages received */
long ru_nvcsw; /* voluntary context switches */
long ru_nivcsw; /* involuntary context switches */
test75.c is the unit test for this new function.
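For reference, a minimal usage example in the spirit of test75.c (not the
test itself); the fields listed above come back as zero for now:

    #include <sys/resource.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct rusage ru;

            if (getrusage(RUSAGE_SELF, &ru) != 0) {
                    perror("getrusage");
                    return 1;
            }
            printf("user time: %ld.%06ld s\n",
                (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
            printf("max rss:   %ld\n", ru.ru_maxrss);
            return 0;
    }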
Change-Id: I3f1eb69de1fce90d087d76773b09021fc6106539
. test74 for mmap functionality
. vm: add a mem_file memory type that specifies an mmap()ped
memory range, backed by a file
. add fdref, an object that keeps track of FD references within
  VM per process and so knows how to de-duplicate the use of FDs
  by various mmap()ped ranges; there can be many more ranges than
  there are FDs
. turned off for now, enable with 'filemap=1' as boot option
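A sketch of the fdref idea with hypothetical field names; the actual VM
structure may differ:

    #include <sys/types.h>

    /* one object per FD that VM holds for a process, reference-counted so
     * that many mmap()ped ranges can share a single FD */
    struct fdref {
            int             fd;             /* FD as held by VM */
            int             refcount;       /* mem_file ranges using it */
            ino_t           ino;            /* identity of the mapped file */
            dev_t           dev;
            struct fdref    *next;
    };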
Change-Id: I640b1126cdaa522a0560301cf6732b7661555672
In libexec, split the memory allocation method into cleared and
non-cleared. Cleared gives zeroed memory, non-cleared gives 'junk'
memory (that will be overwritten anyway, and so needn't be cleared)
that is faster to get.
Also introduce the 'memmap' method that can be used, if available,
to map code and data from executables into a process using the
third-party mmap() mode.
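A sketch of the split, with illustrative callback names rather than the exact
libexec interface, assuming vir_bytes from <minix/type.h>:

    #include <sys/types.h>
    #include <minix/type.h>

    struct exec_loadcb {
            /* zeroed memory, e.g. for BSS */
            int (*allocmem_cleared)(void *opaque, vir_bytes vaddr, size_t len);
            /* 'junk' memory that will be fully overwritten anyway */
            int (*allocmem_prealloc)(void *opaque, vir_bytes vaddr, size_t len);
            /* optional: map file contents directly via third-party mmap() */
            int (*memmap)(void *opaque, vir_bytes vaddr, size_t len, off_t foff);
    };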
Change-Id: I26694fd3c21deb8b97e01ed675dfc14719b0672b
Primary purpose of this change: to support the mmap implementation, VM must
know (a) some block metadata for FS cache blocks, i.e. inode numbers and
inode offsets where applicable; and (b) about *all* cache blocks, i.e. also
those in the FS primary caches and not just the blocks that spill into the
secondary one. This changes the interface and VM data structures.
This change is only for the interface (libminixfs) and VM data
structures; the filesystem code is unmodified, so although the
secondary cache will be used as normal, blocks will not be annotated
with inode information until the FS is modified to provide this
information. Until it is modified, mmap of files will fail gracefully
on such filesystems.
This is indicated to VFS/VM by returning ENOSYS for REQ_PEEK.
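A sketch of the kind of annotated lookup implied by the new interface; this
is a hypothetical prototype, not the exact libminixfs call:

    #include <sys/types.h>
    #include <stdint.h>

    struct buf;

    /* the FS passes the owning inode and the offset within it, so VM can
     * later find the block when a process mmap()s that file; ino 0 would
     * mark an unannotated block */
    struct buf *get_block_ino(dev_t dev, uint64_t block, ino_t ino,
            uint64_t ino_off);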
Change-Id: I1d2df6c485e6c5e89eb28d9055076cc02629594e
This commit removes the secondary cache code implementation from
VM and its usage from libminixfs. It is to be replaced by a new
implementation.
Change-Id: I8fa3af06330e7604c7e0dd4cbe39d3ce353a05b1
. data structure that automatically keeps a set
of pages in reserve, to replace sparepages; it may
be re-used in the future for similar situations,
e.g. if in-filesystem-cache block eviction is
implemented and the FS asks for a new block
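A sketch of the reserve-pool idea with hypothetical names:

    #include <stddef.h>

    #define RESERVE_PAGES 8

    /* pool that is topped up whenever allocation is possible, so a few
     * pages are always on hand when it is not (e.g. while VM handles its
     * own pagefaults) */
    struct pagereserve {
            void    *page[RESERVE_PAGES];
            int      count;
    };

    static void *
    reserve_page_get(struct pagereserve *rp)
    {
            /* caller is expected to refill the pool later */
            return rp->count > 0 ? rp->page[--rp->count] : NULL;
    }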
Change-Id: I149d46c14b9c8e75df16cb94e08907f008c339a6
. the total amount of memory in the system didn't include the memory
used by the boot-time modules and some dynamic allocation by the
kernel at boot time (to map in VM); this was especially apparent on our
ARM board with 'only' 512MB of memory and a huge ramdisk.
. also: *add* the loaded VM module to the freelist after it has
been allocated for & mapped in, instead of cutting it *out* of the
freelist, so we get a few more MB free.
Change-Id: If37ac32b21c9d38610830e21421264da4f20bc4f
if an exec() fails partway through reading in the sections, the target
process is already gone and a defunct process remains. sanity checking
the binary beforehand prevents that.
test10 mutilates binaries and exec()s them on purpose; making an exec()
fail cleanly in such cases seems like acceptable behaviour.
fixes test10 on ARM.
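A sketch of the kind of up-front check meant here (simplified, not the actual
libexec code; the header name is an assumption):

    #include <elf.h>        /* assumption: <sys/exec_elf.h> on NetBSD-style trees */
    #include <errno.h>
    #include <string.h>

    /* validate the header before the old process image is destroyed, so a
     * mutilated binary makes exec() fail cleanly with the caller intact */
    static int
    check_elf_header(const void *image, size_t len)
    {
            const Elf32_Ehdr *hdr = image;

            if (len < sizeof(*hdr) ||
                memcmp(hdr->e_ident, "\177ELF", 4) != 0 ||
                hdr->e_ident[EI_CLASS] != ELFCLASS32)
                    return ENOEXEC;
            return 0;
    }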
Change-Id: I1ed9bb200ce469d4d349073cadccad5503b2fcb0
Introduce explicit abstractions for different mapping types,
handling the instantiation, forking, pagefaults and freeing of
anonymous memory, direct physical mappings, shared memory and
physically contiguous anonymous memory as separate types, making
region.c more generic.
Also some other genericification like merging the 3 munmap cases
into one.
COW and SMAP safemap code is still implicit in region.c.
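A sketch of the per-type abstraction, with illustrative handler and field
names rather than the actual VM declarations:

    struct vir_region;

    /* each memory type supplies its own handlers, so region.c only
     * dispatches and stays generic across anonymous, direct-physical,
     * shared and contiguous anonymous mappings */
    typedef struct mem_type {
            const char *name;
            int (*ev_new)(struct vir_region *vr);
            int (*ev_copy)(struct vir_region *vr, struct vir_region *newvr);
            int (*ev_pagefault)(struct vir_region *vr, unsigned long offset,
                    int write);
            int (*ev_delete)(struct vir_region *vr);
    } mem_type_t;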
. add cpufeature detection of both
. use it for both ipc and kernelcall traps, using a register
for call number
. SYSENTER/SYSCALL does not save any context, therefore userland
has to save it
. to accommodate multiple kernel entry/exit types, the entry
type is recorded in the process struct. hitherto all types
were interrupts (soft int, exception, hard int); now SYSENTER/SYSCALL
is new, with the difference that context is not fully restored
from the proc struct when running the process again. this can't be
done as some information is missing.
. complication: cases in which the kernel has to fully change
process context (i.e. sigreturn). in that case the exit type
is changed from SYSENTER/SYSEXIT to soft-int (i.e. iret) and
context is fully restored from the proc struct. this does mean
the PC and SP must change, as the sysenter/sysexit userland code
will otherwise try to restore its own context. this is true in the
sigreturn case.
. override all usage by setting libc_ipc=1
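A sketch of the feature detection (not the kernel's actual cpufeature code):
SYSENTER is CPUID leaf 1, EDX bit 11 (SEP); SYSCALL is CPUID leaf 0x80000001,
EDX bit 11. Checking the maximum extended leaf first is omitted here.

    #include <stdint.h>

    static void
    cpuid(uint32_t leaf, uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
    {
            __asm__ volatile("cpuid"
                : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                : "a"(leaf));
    }

    static int
    has_sysenter(void)
    {
            uint32_t a, b, c, d;
            cpuid(1, &a, &b, &c, &d);
            return (d >> 11) & 1;           /* SEP */
    }

    static int
    has_syscall(void)
    {
            uint32_t a, b, c, d;
            cpuid(0x80000001, &a, &b, &c, &d);
            return (d >> 11) & 1;           /* SYSCALL/SYSRET */
    }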
complete munmap implementation; single-page references made it
possible to write a general munmap() implementation cleanly.
. memory: let the MIOCRAMSIZE ioctl set the imgrd device
size (but only to 0)
. let the ramdisk command set sizes to 0
. use this command to set /dev/imgrd to 0 after mounting /usr
in /etc/rc, so the boot time ramdisk is freed (about 4MB
currently)
. some strncpy/strcpy to strlcpy conversions
. new <minix/param.h> to avoid including other minix headers
that have colliding definitions with library and commands code,
causing parse warnings
. removed some dead code / assignments
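Illustrative use of the ioctl from a program like the one /etc/rc runs,
assuming MIOCRAMSIZE and u32_t as provided by <sys/ioc_memory.h>:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/ioc_memory.h>
    #include <unistd.h>

    /* shrink /dev/imgrd to 0 so the boot-image ramdisk memory is released */
    static int
    free_imgrd(void)
    {
            u32_t size = 0;         /* only 0 is accepted for imgrd */
            int fd = open("/dev/imgrd", O_RDONLY);

            if (fd < 0)
                    return -1;
            if (ioctl(fd, MIOCRAMSIZE, &size) != 0) {
                    close(fd);
                    return -1;
            }
            close(fd);
            return 0;
    }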
This commit removes all traces of Minix segments (the text/data/stack
memory map abstraction in the kernel) and significance of Intel segments
(hardware segments like CS, DS that add offsets to all addressing before
page table translation). This ultimately simplifies the memory layout
and addressing and makes the same layout possible on non-Intel
architectures.
There are only two types of addresses in the world now: virtual
and physical; even the kernel and processes have the same virtual
address space. Kernel and user processes can be distinguished at a
glance as processes won't use 0xF0000000 and above.
No static pre-allocated memory sizes exist any more.
Changes to booting:
. The pre_init.c leaves the kernel and modules exactly as
they were left by the bootloader in physical memory
. The kernel starts running using physical addressing,
loaded at a fixed location given in its linker script by the
bootloader. All code and data in this phase are linked to
this fixed low location.
. It makes a bootstrap pagetable to map itself to a
fixed high location (also in linker script) and jumps to
the high address. All code and data then use this high addressing.
. All code/data symbols linked at the low addresses are prefixed by
an objcopy step with __k_unpaged_*, so that code cannot
reference highly-linked symbols (which aren't valid yet) or vice
versa (symbols that aren't valid any more).
. The two addressing modes are separated in the linker script by
collecting the unpaged_*.o objects and linking them with low
addresses, and linking the rest high. Some objects are linked
twice, once low and once high.
. The bootstrap phase passes a lot of information (e.g. free memory
list, physical location of the modules, etc.) using the kinfo
struct.
. After this bootstrap the low-linked part is freed.
. The kernel maps in VM into the bootstrap page table so that VM can
begin executing. Its first job is to make page tables for all other
boot processes. So VM runs before RS, and RS gets a fully dynamic,
VM-managed address space. VM gets its privilege info from RS as usual
but that happens after RS starts running.
. Both the kernel loading VM and VM organizing boot processes happen
using the libexec logic. This removes the last reason for VM to
still know much about exec() and vm/exec.c is gone.
Further Implementation:
. All segments are based at 0 and have a 4 GB limit.
. The kernel is mapped in at the top of the virtual address
space so as not to constrain the user processes.
. Processes do not use segments from the LDT at all; there are
no segments in the LDT any more, so no LLDT is needed.
. The Minix segments T/D/S are gone and so none of the
user-space or in-kernel copy functions use them. The copy
functions use a process endpoint of NONE to indicate a physical
address, and a virtual address otherwise.
. The umap call only makes sense to translate a virtual address
to a physical address now.
. Segments-related calls like newmap and alloc_segments are gone.
. All segments-related translation in VM is gone (vir2map etc).
. Initialization in VM is simpler as no moving around is necessary.
. VM and all other boot processes can be linked wherever they wish
and will be mapped in at the right location by the kernel and VM
respectively.
Other changes:
. The multiboot code is less special: it does not use mb_print
for its diagnostics any more but uses printf() as normal, saving
the output into the diagnostics buffer, only printing to the
screen using the direct print functions if a panic() occurs.
. The multiboot code uses the flexible 'free memory map list'
style to receive the list of free memory if available.
. The kernel determines the memory layout of the processes to
a degree: it tells VM where the kernel starts and ends and
where the kernel wants the top of the process to be. VM then
uses this entire range, i.e. the stack is right at the top,
and mmap()ped bits of memory are placed below that downwards,
and the break grows upwards.
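A sketch of the resulting single-range layout; the struct and field names are
illustrative only, assuming vir_bytes from <minix/type.h>:

    #include <minix/type.h>

    struct proc_layout {
            vir_bytes vm_base;      /* bottom of the usable range */
            vir_bytes vm_top;       /* kernel-chosen top of the process */
            vir_bytes stack_top;    /* == vm_top: the stack sits at the top */
            vir_bytes mmap_down;    /* next mmap() goes below this, downwards */
            vir_bytes brk;          /* the break grows upwards from the image */
    };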
Other Consequences:
. Every process gets its own page table as address spaces
can't be separated any more by segments.
. As all segments are 0-based, there is no distinction between
virtual and linear addresses, nor between userspace and
kernel addresses.
. Less work is done when context switching, leading to a net
performance increase. (8% faster on my machine for 'make servers'.)
. The layout and configuration of the GDT makes sysenter and syscall
possible.
. all invocations were S or D, so can safely be dropped
to prepare for the segmentless world
. still assign D to the SCP_SEG field in the message
to make previous kernels usable
. make exec() callers (i.e. vfs and rs) determine the
memory layout by explicitly reserving regions using
mmap() calls on behalf of the exec()ing process,
i.e. handling all of the exec logic, thereby eliminating
all special exec() knowledge from VM.
. the new procedure is: clear the exec()ing process
first, then call third-party mmap()s to reserve memory, then
copy the executable file section contents in, all using callbacks
tailored to the caller's way of starting an executable
. i.e. no more explicit EXEC_NEWMEM-style calls in PM or VM
as with rigid 2-section arguments
. this naturally allows generalizing exec() by simply loading
all ELF sections
. drop/merge of lots of duplicate exec() code into libexec
. not copying the code sections to vfs and into the executable
again is a measurable performance improvement (about 3.3% faster
for 'make' in src/servers/)
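A sketch of the caller-side flow described above; the helper names are
hypothetical and the real logic lives in VFS/RS plus libexec:

    struct exec_info;                       /* libexec-style state (assumed) */

    /* hypothetical callbacks tailored to the caller (VFS or RS) */
    int cb_clearproc(struct exec_info *execi);
    int cb_load_sections(struct exec_info *execi);
    int cb_run(struct exec_info *execi);

    static int
    exec_restart_proc(struct exec_info *execi)
    {
            int r;

            /* 1. wipe the address space of the exec()ing process */
            if ((r = cb_clearproc(execi)) != 0)
                    return r;

            /* 2. reserve regions via third-party mmap() on its behalf and
             *    copy (or map) each ELF section's contents in */
            if ((r = cb_load_sections(execi)) != 0)
                    return r;

            /* 3. set up the stack and let the process run */
            return cb_run(execi);
    }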
these two functions will be used to support moving all exec() functionality
into a single library shared by RS and VFS, with exec() knowledge
leaving VM.
. third-party mmap: allow certain processes (VFS, RS) to
do mmap() on behalf of another process
. PROCCTL: used to free and clear a process' address space
. ipc wants to know about processes that get
signals, so that it can break blocking ipc operations
. doing it for every single signal is wasteful
and causes the annoying 'no slot for signals' message
. this fix tells vm on a per-process basis that it (ipc)
wants to be notified, i.e. only when the process does any ipc calls
. move ipc config to separate config file while we're at it
A new call to vm lets processes yield a part of their memory to vm,
together with an id, getting newly allocated memory in return. vm is
allowed to forget about it if it runs out of memory. processes can ask
for it back using the same id. (These two operations are normally
combined in a single call.)
It can be used as an as-big-as-memory-will-allow block cache for
filesystems, which is how mfs now uses it.
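A sketch of how a filesystem might use the call pair; the wrapper names are
hypothetical, not the exact VM interface:

    #include <stdint.h>
    #include <stddef.h>

    /* give an evicted block to VM under an id; VM may discard it later
     * if it runs out of memory */
    int cache_yield(uint64_t id, void *block, size_t len);

    /* ask for the block back under the same id; fails if VM already
     * reclaimed the memory, in which case the FS re-reads it from disk */
    int cache_getback(uint64_t id, void *block, size_t len);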
this patch changes the way pagefaults are delivered to VM. It adopts
the same model as the out-of-quantum messages sent by kernel to a
scheduler.
- every time a userspace pagefault occurs, the kernel creates a message
which is sent to VM on behalf of the faulting process
- the process is blocked on delivery to VM in the standard IPC code
instead of waiting in a special in-kernel queue (stack), and is not
runnable until VM tells the kernel that the pagefault is resolved and
it is free to clear the RTS_PAGEFAULT flag.
- VM does not need to call the kernel to poll for pagefault information,
which saves many (1/2?) calls, including kernel calls that return "no more
data"
- VM notification by kernel does not need to use signals
- each entry in the proc table is 12 bytes smaller (~3k saved)
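A sketch of the per-fault message, with illustrative field names rather than
the actual message layout:

    /* built by the kernel and delivered to VM on behalf of the faulting
     * process, which stays blocked in the normal IPC path until VM is done
     * and RTS_PAGEFAULT is cleared */
    struct pagefault_msg {
            int             pf_endpoint;    /* faulting process */
            unsigned long   pf_virtual;     /* faulting virtual address */
            int             pf_flags;       /* write access, etc. */
    };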
map_copy_ph_block is replaced by map_clone_ph_block, which can
replace a single physical block by multiple physical blocks.
also,
. merge map_mem.c with region.c, as they manipulate the same
data structures
. NOTRUNNABLE removed as sanity check
. use direct functions for ALLOC_MEM and FREE_MEM again
. add some checks to shared memory mapping code
. fix for data structure integrity when using shared memory
. fix sanity checks