. workaround for clang's stdint.h __STDC_HOSTED__ test
that causes the host stdint.h to be ignored for -ffreestanding,
causing a type to be double-defined in the kernel
. only use for single-page invalidations initially
. shows tiny but statistically significant performance
improvement; will be more helpful in certain VM debug
modes
. fold 2 exception-in-kernel cases (pagefault and rest)
into 1
. for exceptions that occur in kernel, don't just print
kernel stacktrace (typically that is just the exception
handler) but also the stacktrace of when the exception
happened
Now users can choose between libsys, libsys + libminc and
libsys + libc. E.g. PUFFS/FUSE servers need libsys + libc while
old servers can use libsys + libminc.
. remove a few asserts in the kernel and 64-bit library
that are not compatible with the timing code
. change the TIME_BLOCKS code a little to work in-kernel
3 sets of libraries are built now:
. ack: all libraries that ack can compile (/usr/lib/i386/)
. clang+elf: all libraries with minix headers (/usr/lib/)
. clang+elf: all libraries with netbsd headers (/usr/netbsd/)
Once everything can be compiled with netbsd libraries and headers, the
/usr/netbsd hierarchy will be obsolete and its libraries compiled with
netbsd headers will be installed in /usr/lib, and its headers
in /usr/include. (i.e. minix libc and current minix headers set
will be gone.)
To use the NetBSD libc system (libraries + headers) before
it is the default libc, see:
http://wiki.minix3.org/en/DevelopersGuide/UsingNetBSDCode
This wiki page also documents the maintenance of the patch
files of minix-specific changes to imported NetBSD code.
Changes in this commit:
. libsys: Add NBSD compilation and create a safe NBSD-based libc.
. Port rest of libraries (except libddekit) to new header system.
. Enable compilation of libddekit with new headers.
. Enable kernel compilation with new headers.
. Enable drivers compilation with new headers.
. Port legacy commands to new headers and libc.
. Port servers to new headers.
. Add <sys/sigcontext.h> in compat library.
. Remove dependency file in tree.
. Enable compilation of common/lib/libc/atomic in libsys
. Do not generate RCSID strings in libc.
. Temporarily disable zoneinfo as it is incompatible with the NetBSD format
. obj-nbsd for .gitignore
. Procfs: use only integer arithmetic. (Antoine Leca)
. Increase ramdisk size to create NBSD-based images.
. Remove INCSYMLINKS handling hack.
. Add nbsd_include/sys/exec_elf.h
. Enable ELF compilation with NBSD libc.
. Add 'make nbsdsrc' in tools to download reference NetBSD sources.
. Automate minix-port.patch creation.
. Avoid using fstatvfs() as it is *extremely* slow and unneeded.
. Set err() as PRIVATE to avoid name clash with libc.
. [NBSD] servers/vm: remove compilation warnings.
. u32 is not a long in NBSD headers.
. UPDATING info on netbsd hierarchy
. commands fixes for netbsd libc
sys_umap now supports only:
- looking up the physical address of a virtual address in the address space
of the caller;
- looking up the physical address of a grant for which the caller is the
grantee.
This is enough for nearly all umap users. The new sys_umap_remote supports
lookups in arbitrary address spaces and grants for arbitrary grantees.
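To illustrate, a driver needing a physical address might now do something
along these lines; the sys_umap()/sys_umap_remote() parameter order and the
segment selectors below are assumptions based on the description above, not
taken from the code:

  #include <minix/syslib.h>      /* sys_umap(), sys_umap_remote(); assumed */
  #include <minix/safecopies.h>  /* cp_grant_id_t */

  /* Sketch only: segment selectors (D, VM_GRANT) are assumptions. */
  static int lookup_phys(void *buf, size_t len,
      endpoint_t granter, cp_grant_id_t grant, size_t glen,
      endpoint_t grantee, phys_bytes *phys)
  {
      int r;

      /* Always allowed: a virtual address in the caller's own space. */
      if ((r = sys_umap(SELF, D, (vir_bytes) buf, len, phys)) != OK)
          return r;

      /* Always allowed: a grant for which the caller is the grantee. */
      if ((r = sys_umap(granter, VM_GRANT, (vir_bytes) grant, glen, phys)) != OK)
          return r;

      /* Privileged: a grant lookup on behalf of an arbitrary grantee. */
      return sys_umap_remote(granter, grantee, VM_GRANT,
          (vir_bytes) grant, glen, phys);
  }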
and minor fixes:
. add ack/clean target to lib, 'unify' clean target
. add includes as library dependency
. mk: exclude warning options clang doesn't have in non-gcc
. set -e in lib/*.sh build files
. clang compile error circumvention (disable NOASSERTS for release builds)
- time stops if there is no activity and the timer expired before
we halted the cpu
- restart_local_timer() checks if the timer has expired and if so it
restarts it
- we do the same when switching back to userspace
- skip processes that are not asynsending to the target
- do not clear whole asynsend table upon IPC permission error
- be more accepting when one table entry is bogus later on
Before safecopies, the IO_ENDPT and DL_ENDPT message fields were needed
to know which actual process to copy data from/to, as that process may
not always be the caller. Now that we have full safecopy support, these
fields have become useless for that purpose: the owner of the grant is
*always* the caller. Allowing the caller to supply another endpoint is
in fact dangerous, because the callee may then end up using a grant
from a third party. One could call this a variant of the confused
deputy problem.
From now on, safecopy calls should always use the caller's endpoint as
grant owner. This fully obsoletes the DL_ENDPT field in the
inet/ethernet protocol. IO_ENDPT has other uses besides identifying the
grant owner though. This patch renames IO_ENDPT to USER_ENDPT, not only
because that is a more fitting name (it should never be used for I/O
after all), but also in order to intentionally break any old system
source code outside the base system. If this patch breaks your code,
fixing it is fairly simple:
- DL_ENDPT should be replaced with m_source;
- IO_ENDPT should be replaced with m_source when used for safecopies;
- IO_ENDPT should be replaced with USER_ENDPT for any other use, e.g.
when setting REP_ENDPT, matching requests in CANCEL calls, getting
DEV_SELECT flags, and retrieving of the real user process's endpoint
in DEV_OPEN.
The changes in this patch are binary backward compatible.
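As an illustration of the replacement rules, a driver request handler might
end up looking roughly like this; only the message fields mentioned above are
taken from the patch, the rest is a hedged sketch:

  #include <minix/ipc.h>   /* message */
  #include <minix/com.h>   /* USER_ENDPT, REP_ENDPT */

  static void do_read(message *m_ptr)
  {
      /* Grant owner for safecopies: always the caller itself. */
      endpoint_t grant_owner = m_ptr->m_source;    /* was: m_ptr->IO_ENDPT */

      /* The user process the I/O is done for, kept only for bookkeeping
       * such as REP_ENDPT or matching CANCEL requests. */
      endpoint_t user = m_ptr->USER_ENDPT;         /* was: IO_ENDPT as well */

      /* ... sys_safecopyto(grant_owner, grant, ...) would go here ... */
      (void) grant_owner;
      (void) user;
  }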
completed (successfully or not). AMF_NOTIFY_ERR can be used if the sender
only wishes to be notified in case of an error (e.g., EDEADSRCDST). A new
endpoint ASYNCM will be the sender of the notification.
. helps debugging output; you can see the difference
between parent and child easily (it's sometimes
confusing to see an expected endpoint number with
an unexpected name, i.e. before exec())
. when processes crash after fork and before exec, it's
an instant hint that that's what's going on, instead of
it being the parent (endpoint numbers don't usually convey
this)
. name returns to 'normal' after exec(), so *F isn't visible
normally at all. (Except for RS which forks apparently.)
Headers that will be shared between old includes and NetBSD-like includes
are moved into common/include tree. They are still copied in /usr/include
in 'make includes', so compilation and programs aren't affected.
- kernel maintains a cpu_info array which contains various
information about each cpu as filled when each cpu boots
- the information contains identification, features etc.
- flush TLB of processes only if the page tables have been changed and
the page tables of this process are already loaded on this cpu which
means that there might be stale entries in TLB. Until now SMP was
always flushing TLB to make sure everything is consistent.
- accidentally this wasn't part of the SMP merge and the implementation
remained incomplete, with the timer still ticking periodically
- APIC timer is set for a single shot and restarted every time it
expires. This way we can keep the APs truly idle
- the timer is restarted a little later before leaving to userspace
- LAPIC_TIMER_ICR is written before LAPIC_LVTTR so the newest value is
used
- fixed spurious and error interrupt handlers
- to avoid hogging the system the warning isn't reported every time, just
once every 100 times; similarly for the spurious PIC interrupts
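The rate limiting mentioned above amounts to a simple counter; a minimal
sketch, with made-up names rather than the kernel's actual variables:

  /* Sketch of the rate limiting described above; names are made up. */
  static unsigned spurious_count;

  static void report_spurious_irq(void)
  {
      /* Warn only once per 100 occurrences so a storm of spurious
       * interrupts cannot flood the console. */
      if ((spurious_count++ % 100) == 0)
          printf("APIC: spurious interrupt #%u\n", spurious_count);
  }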
- a different set of MSRs and performance counters is used on AMD
- when initializing NMI watchdog the test for Intel architecture
performance counters feature only applies to Intel now
- NMI is enabled if the CPU belongs to a family which has the
performance counters that we use
- sometimes the system needs to know precisely what type of cpu it is
running on. The cpu type is detected during arch specific
initialization and kept in the machine structure for later use.
- as a side-effect the information is exported to userland
- the Intel architecture cycle counter (performance counter) does not
count when the CPU is idle, therefore we use a busy loop instead of
halting the cpu when there is nothing to schedule
- the downside is that handling interrupts may be accounted as idle
time if a sample is taken before we get out of the nested trap and
pick a new process
- when profiling is compiled in, the kernel includes a 64M buffer for
samples
- 64M is the default used by the profile tool as its buffer
- when using nmi profiling it is not possible to always copy samples
straight to userland as the nmi may (and does) happen at bad moments
- reduces sampling overhead as samples are copied out only when
profiling stops
- if profile --nmi is used, the kernel uses NMI watchdog based sampling
using the Intel architecture performance counters
- using NMI makes kernel profiling possible
- watchdog kernel lockup detection is disabled while sampling as we
may get unpredictable interrupts in kernel and thus possibly many
false positives
- if watchdog is not enabled at boot time, profiling enables it and
turns it off again when done
- when kernel profiles a process for the first time it saves an entry
describing the process [endpoint|name]
- every profile sample is only [endpoint|pc]
- the profile utility creates a table of endpoint <-> name relations,
translates endpoints of samples into names and writes out the
results to comply with the processing tools
- "task" endpoints like KERNEL are negative thus we must cast them to
unsigned when hashing
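The two record kinds could look roughly like the structures below; this is
only an illustration of the [endpoint|name] / [endpoint|pc] description, not
the actual profile format definitions:

  #include <minix/types.h>   /* endpoint_t, vir_bytes */

  /* Written once, the first time a process shows up in the sample buffer. */
  struct prof_proc_example {
      endpoint_t ep;          /* may be negative for kernel tasks */
      char name[8];           /* process name, e.g. "vfs" */
  };

  /* Written for every sample; profile(1) later joins it with the table
   * above to translate endpoints back into names. */
  struct prof_sample_example {
      endpoint_t ep;
      vir_bytes pc;           /* program counter at sample time */
  };

  /* Kernel task endpoints are negative, so cast before hashing. */
  #define EXAMPLE_HASH_SIZE 1024
  #define EXAMPLE_HASH(ep)  (((unsigned)(ep)) % EXAMPLE_HASH_SIZE)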
- contributed by Bjorn Swift
- adds process accounting, for example counting the number of messages
sent, how often the process was preempted and how much time it spent
in the run queue. These statistics, along with the current cpu load,
are sent back to the user-space scheduler in the Out Of Quantum
message.
- the user-space scheduler may choose to make use of these statistics
when making scheduling decisions. For instance the cpu load becomes
especially useful when scheduling on multiple cores.
- when a process is migrated to a different CPU it may have an active
FPU context in the processor registers. We must save it and migrate
it together with the process.
- EBADCPU is returned if the scheduler tries to run a process on a CPU
that either does not exist or isn't booted
- this change was originally meant to deal with stupid cpuid
instruction which provides totally useless information about
hyper-threading and MPS which does not deal with ht at all. ACPI
provides correct information. If ht is turned off it looks like some
CPUs failed to boot. Nevertheless this patch may be handy for
testing/benchmarking in the future.
- this makes sure that each process always runs with an updated TLB
- this is the simplest way to achieve consistency. As it means
significant performance degradation when not required, this is not the
final solution and will be refined
- RTS_VMINHIBIT flag is used to stop process while VM is fiddling with
its pagetables
- more generic way of sending synchronous scheduling events among cpus
- do the x-cpu smp sched calls only if the target process is runnable.
If it is not, it cannot be running and it cannot become runnable while
this CPU holds the BKL
- APIC timer always reprogrammed if expired
- timer tick never happens when in kernel => never immediate return
from userspace to kernel because of a buffered interrupt
- renamed argument to lapic_set_timer_one_shot()
- removed arch_ prefix from timer functions
- any cpu can use smp_schedule() to tell another cpu to reschedule
- if an AP is idle, it turns off timer as there is nothing to
preempt, no need to wakeup just to go back to sleep again
- if a cpu makes a process runnable on an idle cpu, it must wake it up
to reschedule
- sys_schedule can change only selected values, -1 means that the
current value should be kept unchanged. For instance we mostly want
to change the scheduling quantum and priority but we want to keep
the process at the current cpu
- RS can hand off its processes to scheduler
- service can read the destination cpu from system.conf
- RS can pass the information farther
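As an example of the sys_schedule() semantics described above, a user-space
scheduler that only wants to refill the quantum and adjust the priority while
keeping the process on its current cpu might do something like this; the
parameter order is an assumption:

  #include <minix/syslib.h>   /* sys_schedule(); signature assumed */

  /* Refill the quantum and set a new priority, but keep the current CPU:
   * -1 means "leave this value unchanged". */
  static int refill_quantum(endpoint_t proc_ep, int priority, int quantum_ms)
  {
      return sys_schedule(proc_ep, priority, quantum_ms, -1 /* cpu */);
  }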
- pressing 'B' on the serial console prints statistics for BKL per cpu.
- 'b' resets the counters
- it presents the number of cycles each CPU spends in kernel, and how many
cycles it spends spinning while waiting for the BKL
- it shows optimistic estimation in how many cases we get the lock
immediately without spinning. As the test is not atomic the lock may
be already held by some other cpu before we actually try to acquire
it.
- cross-address space copies use these slots to map user memory for
kernel. This avoids any collisions between CPUs
- well, we only have a single CPU running at a time, this is just to
be safe for the future
- machine information contains the number of cpus and the bsp id
- a dummy SMP scheduler which keeps all system processes on BSP and
all other process on APs. The scheduler remembers how many processes
are assigned to each CPU and always picks the one with the least
processes for a new process.
- apic_send_ipi() to send inter-processor interrupts (IPIs)
- APIC IPI schedule and halt handlers to signal x-cpu that a cpu should
reschedule or halt
- various little changes to let APs run
- no processes are scheduled at the APs and therefore they are idle
except for being interrupted by a timer from time to time
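The dummy scheduler's placement rule above boils down to a small loop; a
sketch assuming a per-CPU process counter, with hypothetical names throughout
(only CONFIG_MAX_CPUS comes from this change, via the build system):

  extern unsigned ncpus;        /* assumed kernel globals */
  extern unsigned bsp_cpu_id;

  static unsigned procs_on_cpu[CONFIG_MAX_CPUS];

  static unsigned pick_cpu(int is_system_proc)
  {
      unsigned cpu;
      int best = -1;

      /* System processes stay on the BSP. */
      if (is_system_proc)
          return bsp_cpu_id;

      /* Everything else goes to the AP with the fewest processes. */
      for (cpu = 0; cpu < ncpus; cpu++) {
          if (cpu == bsp_cpu_id)
              continue;
          if (best < 0 || procs_on_cpu[cpu] < procs_on_cpu[best])
              best = cpu;
      }
      if (best < 0)             /* uniprocessor: only the BSP exists */
          best = bsp_cpu_id;

      procs_on_cpu[best]++;
      return best;
  }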
- tsc_ctr_switch is made cpu local
- although an x86 specific variable it must be declared globally as the
cpulocal implementation does not allow otherwise
- each CPU has its own runqueues
- processes on BSP are put on the runqueues later after a switch to
the final stack when cpuid works to avoid special cases
- enqueue() and dequeue() use the run queues of the cpu the process is
assigned to
- pick_proc() uses the local run queues
- printing of per-CPU run queues ('2') on serial console
- APs configure local timers
- while configuring local APIC timer the CPUs fiddle with the interrupt
handlers. As the interrupt table is shared the BSP must not run
- APs wait until BSP turns paging on, it is not possible to safely
execute any code on APs until we can turn paging on as well as it
must be done synchronously everywhere
- APs turn paging on but do not continue and wait
- to isolate execution inside kernel we use a big kernel lock
implemented as a spinlock
- the lock is acquired asap after entering kernel mode and released as
late as possible. Only one CPU at a time can execute the core kernel
code
- measurements on real hw show that the overhead of this lock is close
to 0% of kernel time for the current system
- the overhead of this lock may be as high as 45% of kernel time in
virtual machines depending on the ratio between physical CPUs
available and emulated CPUs. The performance degradation is
significant
- kernel detects CPUs by searching ACPI tables for local apic nodes
- each CPU has its own TSS that points to its own stack. All cpus boot
on the same boot stack (in sequence) but switch to their private stacks
as soon as they can.
- final booting code in main() placed in bsp_finish_booting() which is
executed only after the BSP switches to its final stack
- apic functions to send startup interrupts
- assembler functions to handle CPU features not needed for single cpu
mode like memory barriers, HT detection etc.
- new files kernel/smp.[ch], kernel/arch/i386/arch_smp.c and
kernel/arch/i386/include/arch_smp.h
- 16-bit trampoline code for the APs. It is executed by each AP after
receiving startup IPIs; it brings the CPUs up to 32bit mode and lets
them spin in an infinite loop so they don't do any damage.
- implementation of kernel spinlock
- CONFIG_SMP and CONFIG_MAX_CPUS set by the build system
- most global variables carry information which is specific to the
local CPU and each CPU must have its own copy
- cpu local variable must be declared in cpulocal.h between
DECLARE_CPULOCAL_START and DECLARE_CPULOCAL_END markers using
DECLARE_CPULOCAL macro
- to access the cpu local data the provided macros must be used
get_cpu_var(cpu, name)
get_cpu_var_ptr(cpu, name)
get_cpulocal_var(name)
get_cpulocal_var_ptr(name)
- using these macros makes future changes in the implementation
possible
- switching to ELF will make the declaration of cpu local data much
simpler, e.g.
CPULOCAL int blah;
anywhere in the kernel source code
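Putting the rules above together, declaring and using a cpu-local variable
might look like this; the variable itself and the exact macro syntax are
illustrative assumptions:

  /* In cpulocal.h, between the markers: */
  DECLARE_CPULOCAL_START
  DECLARE_CPULOCAL(struct proc *, fpu_owner_example);   /* example variable */
  DECLARE_CPULOCAL_END

  /* In kernel code, always through the accessor macros: */
  void cpulocal_usage_example(unsigned other_cpu)
  {
      struct proc *mine   = get_cpulocal_var(fpu_owner_example);  /* this cpu  */
      struct proc **other = get_cpu_var_ptr(other_cpu, fpu_owner_example);
      (void) mine; (void) other;
  }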
- kernel turns on IO APICs if no_apic is _not_ set or is equal 0
- pci driver must use the acpi driver to setup IRQ routing otherwise
the system cannot work correctly except systems like KVM that use
only legacy (E)ISA IRQs 0-15
- kernel exports DSDP (the root pointer where ACPI parsing starts) and
apic_enabled in the machine structure.
- ACPI driver uses DSDP to locate ACPI in memory. acpi_enabled tells the
PCI driver to query ACPI for IRQ routing information.
- the ability for kernel to use ACPI tables to detect IO APICs. It is
the bare minimum the kernel needs to know about ACPI tables.
- it will be used to find out about processors as the MPS tables are
deprecated by ACPI and not all vendors provide them.
- kernel compile was broken with gcc as putchar() was added by gcc in
stacktrace.c
- add -fno-builtin everywhere to avoid such problems in the future
- -fno-builtin in kernel now redundant
- for better readability xpp is substituted by sender
- makes sure that the dequeued sender has p_q_link == NULL and that
this condition holds when enqueuing the sender again. This is a
sanity check to make sure that the new sender is not enqueued
already
- Before this change the dequeued sender's p_q_link may not be NULL
and it was only set to NULL when enqueued again
- removes p_delivermsg_lin item from the process structure and code
related to it
- like the send part, the receive does not need to use the
PHYS_COPY_CATCH() and umap_local() couple.
- The address space of the target process is installed before
delivermsg() is called.
- unlike the linear address, the virtual address does not change when
paging is turned on nor after fork().
- FPU context is stored only if conflict between 2 FPU users or while
exporting context of a process to userspace while it is the active
user of FPU
- FPU has its owner (fpu_owner) which points to the process whose
state is currently loaded in FPU
- the FPU exception is only turned on when scheduling a process which
is not the owner of FPU
- FPU state is restored for the process that generated the FPU
exception. This process runs immediately, without letting the scheduler
pick a new process, to resolve the FPU conflict asap and to minimize
FPU thrashing and FPU exception handler execution
- all non-FPU-exception kernel entries are faster as FPU state is
neither checked nor saved
- removed MF_USED_FPU flag, only MF_FPU_INITIALIZED remains to signal
that a process has used FPU in the past
There seems to have been a broken assumption in the fpu context
restoring code. It restores the context of the running process, without
guarantee that the current process is the one that will be scheduled.
This caused fpu saving for a different process to be triggered without
fpu hardware being enabled, causing an fpu exception in the kernel. This
practically only shows up with DEBUG_RACE on. Fix by thruby + me.
The fix
. is to only set the fpu-in-use-by-this-process flag in the
exception handler, and then take care of fpu restoring when
actually returning to userspace
And the patch
. translates fpu saving and restoring to c in arch_system.c,
getting rid of a juicy chunk of assembly
. makes osfxsr_feature private to arch_system.c
. removes most of the arch dependent code from do_sigsend
- substituted the use of the m_source message field by
caller->p_endpoint in kernel calls. It is the same information, just
passed more intuitively.
- the last dependency on m_type field is removed.
- do_unused() is substituted by a check for NULL.
- this pretty much removes the dependency of kernel calls on the general
message format. In the future this may be used to pass the kcall
arguments in a different structure or registers (x86-64, ARM?) The
kcall number may be passed in a register already.
- removes dependency of do_safecopy() on the m_type field of the kcall
messages.
- instead of do_safecopy() figuring out what action is requested, the
correct safecopy method is called right away.
- Currently the cpu time quantum is timer-ticks based. Thus the
remaining quantum is decreased only if the process is interrupted
by a timer tick. As processes block a lot this typically does not
happen for normal user processes. Also the quantum depends on the
frequency of the timer.
- This change makes the quantum milliseconds based. Internally the
milliseconds are translated into cpu cycles. Every time userspace
execution is interrupted by kernel the cycles just consumed by the
current process are deducted from the remaining quantum.
- It makes the quantum system timer frequency independent.
- The boot processes' quantum is loosely derived from the tick-based
quanta and the 60Hz timer and is subject to future change
- the 64bit arithmetic is a little ugly; it will be changed once we have
compiler support for 64bit integers (soon)
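Conceptually the accounting becomes a per-kernel-entry cycle deduction; a
sketch written with native 64-bit arithmetic for readability (the real code
has to use the u64 helper routines for now, and the field names here are
placeholders):

  static void charge_quantum(struct proc *p,
      unsigned long long tsc_enter, unsigned long long tsc_leave)
  {
      unsigned long long used = tsc_leave - tsc_enter; /* cycles in userspace */

      if (used >= p->p_cpu_time_left) {
          p->p_cpu_time_left = 0;          /* quantum exhausted */
          RTS_SET(p, RTS_NO_QUANTUM);      /* let the scheduler decide */
      } else {
          p->p_cpu_time_left -= used;
      }
  }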
-Makefile updates
-Update mkdep
-Build fixes/warning cleanups for some programs
-Restore leading underscores on global syms in kernel asm files
-Increase ramdisk size
ask to map in oxpcie i/o memory and support serial i/o for it in the
kernel. set oxpcie=<address> in boot monitor (retrieve address using
pci_debug=1 output). (no sanity checking is done on the address
currently.) disabled by default.
The change also contains some other minor cleanup (a new serial.h to set
register info common to UART and the OXPCIe card, in-kernel memory
mapping a little more structured and env_get() to get sysenv variables
without knowing about the params_buffer).
In this second phase, scheduling is moved from PM to its own
scheduler (see r6557 for phase one). In the next phase we hope to a)
include useful information in the "out of quantum" message and b)
create some simple scheduling policy that makes use of that
information.
When the system starts up, PM will iterate over its process table and
ask SCHED to take over scheduling unprivileged processes. This is
done by sending a SCHEDULING_START message to SCHED. This message
includes the process's endpoint, the parent's endpoint and its nice
level. The scheduler adds this process to its schedproc table, issues
a schedctl, and returns its own endpoint to PM - as the endpoint of
the effective scheduler. When a process terminates, a SCHEDULING_STOP
message is sent to the scheduler.
The reason for this effective endpoint is for future compatibility.
Some day, we may have a scheduler that, instead of scheduling the
process itself, forwards the SCHEDULING_START message on to another
scheduler.
PM has information on who schedules whom. As such, scheduling
messages from user-land are sent through PM. An example is when
processes change their priority, using nice(). In that case, a
getsetpriority message is sent to PM, which then sends a
SCHEDULING_SET_NICE to the process's effective scheduler.
When a process is forked through PM, it inherits its parent's
scheduler, but is spawned with an empty quantum. As before, a request
to fork a process flows through VM before returning to PM, which then
wakes up the child process. This flow has been modified slightly so
that PM notifies the scheduler of the new process, before waking up
the child process. If the scheduler fails to take over scheduling,
the child process is torn down and the fork fails with an error
value.
Process priority is entirely decided upon using nice levels. PM
stores a copy of each process's nice level and when a child is
forked, its parent's nice level is sent in the SCHEDULING_START
message. How this level is mapped to a priority queue is up to the
scheduler. It should be noted that the nice level is used to
determine the max_priority and the parent could have been in a lower
priority when it was spawned. To prevent a CPU intensive process from
hogging the CPU by continuously forking children that get scheduled
in the max_priority, the scheduler should determine in which queue
the parent is currently scheduled, and schedule the child in that
same queue.
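A sketch of that policy as it might appear in the user-space scheduler; the
schedproc fields and the nice-to-queue mapping below are illustrative
assumptions, not code from this change:

  #include <limits.h>   /* NZERO */

  static int nice_to_queue(int nice)   /* nice in -NZERO .. NZERO */
  {
      int q = MAX_USER_Q +
          (nice + NZERO) * (MIN_USER_Q - MAX_USER_Q) / (2 * NZERO);

      if (q < MAX_USER_Q) q = MAX_USER_Q;   /* lower number = better queue */
      if (q > MIN_USER_Q) q = MIN_USER_Q;
      return q;
  }

  static void place_forked_child(struct schedproc *parent,
      struct schedproc *child, int child_nice)
  {
      child->max_priority = nice_to_queue(child_nice); /* ceiling from nice */
      child->priority     = parent->priority;  /* parent's current queue */
      child->time_slice   = 0;                 /* empty quantum on fork   */
  }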
Other fixes: The USER_Q in kernel/proc.h was incorrectly defined as
NR_SCHED_QUEUES/2. That results in an "off by one" error when
converting priority->nice->priority for nice=0. This also had the
side effect that if someone were to set the MAX_USER_Q to something
else than 0, then USER_Q would be off.
- this patch only renames schedcheck() to switch_to_user(),
cycles_accounting_stop() to context_stop() and restart() to
+restore_user_context()
- the motivation is that since the introduction of schedcheck() it has
been abused for many things. It deserves a better name. It should
express the fact that from the moment we call the function we are in
the process of switching to user.
- cycles_accounting_stop() was originally a single purpose function.
As this function is called at very convenient places it is used for
other things too, e.g. (un)locking the kernel. Thus it deserves
a better name too.
- with the old names, restart() did not call schedcheck(); calls to
restart() are now replaced by calls to schedcheck() [switch_to_user],
which in turn calls restart() [restore_user_context]
it does this by
- making all processes interruptible by running out of quantum
- giving all processes a single tick of quantum
- picking a random runnable process instead of in order, and
from a single pool of runnable processes (no priorities)
This together with very high HZ values currently provokes some race conditions
seen earlier only when running with SMP.
- this patch substitutes *xpp for sender to increase readability of
mini_receive().
- makes sure that the dequeued sender has p_q_link == NULL and that
this condition holds when enqueuing the sender again.
- it is a sanity check to make sure that the new sender is not
enqueued already. Before this change the dequeued sender's p_q_link
may not be NULL and it was only set to NULL when enqueued again.
- deadlock() is more verbose in case of a detected deadlock. First, it
lists all processes in the deadlock group. Then it prints the proc
extra info, not only stack trace and register dump
- this patch moves the former printslot() from arch_system.c to
debug.c and reimplements it slightly. The output is not changed,
however, the process information is printed in a separate function
print_proc() in debug.c as such a function is also handy in other
situations and should be publicly available when debugging.
this patch changes the way pagefaults are delivered to VM. It adopts
the same model as the out-of-quantum messages sent by kernel to a
scheduler.
- every time a userspace pagefault occurs, the kernel creates a message
which is sent to VM on behalf of the faulting process
- the process is blocked on delivery to VM in the standard IPC code
instead of waiting in a special in-kernel queue (stack) and is not
runnable until VM tells the kernel that the pagefault is resolved and
it is free to clear the RTS_PAGEFAULT flag.
- VM does not need to call the kernel and poll for the pagefault
information, which saves many (1/2?) calls and kernel calls that
return "no more data"
- VM notification by kernel does not need to use signals
- each entry in the proc table is 12 bytes smaller (~3k saved)
- This patch removes the time slice split between parent and child in
fork.
- The time slice of the parent remains unchanged and the child does
not have any.
- If the process has a scheduler, the scheduler must assign the
quantum and priority of the new process and let it run.
- If the child does not inherit a scheduler, it is scheduled by the
dummy default kernel policy. (servers, drivers, etc.)
- In theory, the scheduler can change the quantum even of the parent
process and implement any policy for splitting the quantum as
neither the parent nor the child are runnable. Sending the
out-of-quantum message on behalf of the processes may look like the
right solution, however, the scheduler would probably handle the
message before the whole fork protocol is finished. This way the
scheduler has absolute control over when the process should become
runnable.
- this is a small addition to the userspace scheduling.
proc_kernel_scheduler() tests whether to use the default scheduling
policy in kernel. It is true if the process' scheduler is NULL _or_
self. Currently none of the tests was complete.
- it is not necessary to test whether the scheduler is a system
process, as the process already has permission to make this call.
- it is better to test whether the scheduler has permission to make
changes to this process before testing whether the values are valid.
SYSLIB CHANGES:
- DS calls to publish / retrieve labels consider endpoints instead of u32_t.
VFS CHANGES:
- mapdriver() only adds an entry in the dmap table in VFS.
- dev_up() is only executed upon reception of a driver up event.
INET CHANGES:
- INET no longer searches for existing drivers instances at startup.
- A network driver is (re)initialized upon reception of a driver up event.
- Networking startup is now race-free by design. No need to waste 5 seconds
at startup any more.
DRIVER CHANGES:
- Every driver publishes driver up events when starting for the first time or
in case of restart when recovery actions must be taken in the upper layers.
- Driver up events are published by drivers through DS.
- For regular drivers, VFS is normally the only subscriber, but not necessarily.
For instance, when the filter driver is in use, it must subscribe to driver
up events to initiate recovery.
- For network drivers, inet is the only subscriber for now.
- Every VFS driver is statically linked with libdriver, every network driver
is statically linked with libnetdriver.
DRIVER LIBRARIES CHANGES:
- Libdriver is extended to provide generic receive() and ds_publish() interfaces
for VFS drivers.
- driver_receive() is a wrapper for sef_receive() also used in driver_task()
to discard spurious messages that were meant to be delivered to a previous
version of the driver.
- driver_receive_mq() is the same as driver_receive() but integrates support
for queued messages.
- driver_announce() publishes a driver up event for VFS drivers and marks
the driver as initialized and expecting a DEV_OPEN message.
- Libnetdriver is introduced to provide similar receive() and ds_publish()
interfaces for network drivers (netdriver_announce() and netdriver_receive()).
- Network drivers all support live update with no state transfer now.
KERNEL CHANGES:
- Added kernel call statectl for state management. Used by driver_announce() to
unblock any callers doing sendrec() to the driver.
this patch does not add or change any functionality of do_ipc(), it
only makes things a little cleaner (hopefully).
Until now do_ipc() was responsible for handling all ipc calls. The
catch is that SENDA is fairly different, which results in some ugly
code like this typecasting and variable naming that does not make
much sense for SENDA and makes the code hard to read.
result = mini_senda(caller_ptr, (asynmsg_t *)m_ptr, (size_t)src_dst_e);
As it is called directly from assembly, the new do_ipc() takes as
input the values of 3 registers in reg_t variables (it used to be 4,
however, bit_map wasn't used so I removed it), does the checks common
to all ipc calls and calls the appropriate handler, either
do_sync_ipc() (all except SENDA) or mini_senda() (for SENDA), while
typecasting the reg_t values correctly. As a result, handling SENDA
differences in do_sync_ipc() is no longer needed. Also the code that
uses the msg_size variable is improved a little bit.
arch_do_syscall() is simplified too.
- IPC_FLG_MSG_FROM_KERNEL status flag is returned to userspace if the
receive was satisfied by a message which was sent by the kernel on
behalf of a process. This is perfectly reliable information.
- MF_SENDING_FROM_KERNEL flag added to processes to be able to set
IPC_FLG_MSG_FROM_KERNEL when finishing receive if the receiver
wasn't ready to receive immediately.
- PM is changed to use this information to confirm that the scheduling
messages are indeed from the kernel and not faked by a process.
PM uses sef_receive_status()
- get_work() is removed from PM to make the changes simpler
- there are cycles wasted in the IPC call due to a fairly complicated
way of copying messages from userland to kernel. Sometimes this
complicated way (generic though) is used even for copying within the
kernel address space, sometimes it is used for copying in case _no_
copying is necessary. The goal of this patch is to improve this a
little bit.
- the places where a copy is from user to kernel use
copy_msg_from_user(), kernel-kernel copies are turned into
assignments, and BuildNotifyMessage uses the delivery buffers to
avoid copying.
- copy_msg_from_user() was introduced when removing the system task
and is about 2/3 faster than using the current mechanism
(phys_copy). It also avoids the PHYS_COPY_CATCH macro. Assignment is
also faster and no copy is the fastest ;-) so perhaps there will be
some hardly noticeable performance gain besides the cleanup.
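The distinction amounts to something like the following in the delivery path;
a rough sketch only, and copy_msg_from_user()'s exact parameter list is an
assumption here:

  static int deliver_example(struct proc *dst, message *user_mbuf,
      const message *kernel_msg)
  {
      if (user_mbuf != NULL) {
          /* Source lives in userland: one guarded copy into the kernel. */
          if (copy_msg_from_user(user_mbuf, &dst->p_delivermsg) != 0)
              return EFAULT;
      } else {
          /* Source is already in kernel memory (e.g. a notify message
           * built into the delivery buffer): plain assignment is enough. */
          dst->p_delivermsg = *kernel_msg;
      }
      return OK;
  }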
- contributed by Bjorn Swift
- In this first phase, scheduling is moved from the kernel to the PM
server. The next steps are to a) move scheduling to its own server
and b) include useful information in the "out of quantum" message,
so that the scheduler can make use of this information.
- The kernel process table now keeps record of who is responsible for
scheduling each process (p_scheduler). When this pointer is NULL,
the process will be scheduled by the kernel. If such a process runs
out of quantum, the kernel will simply renew its quantum and requeue
it.
- When PM loads, it will take over scheduling of all running
processes, except system processes, using sys_schedctl().
Essentially, this only results in taking over init. As children
inherit a scheduler from their parent, user space programs forked by
init will inherit PM (for now) as their scheduler.
- Once a process has been assigned a scheduler, and runs out of
quantum, its RTS_NO_QUANTUM flag will be set and the process
dequeued. The kernel will send a message to the scheduler, on the
process' behalf, informing the scheduler that it has run out of
quantum. The scheduler can take what ever action it pleases, based
on its policy, and then reschedule the process using the
sys_schedule() system call.
- Balance queues does not work as before. While the old in-kernel
function used to renew the quantum of processes in the highest
priority run queue, the user-space implementation only acts on
processes that have been bumped down to a lower priority queue.
This approach reacts slower to changes than the old one, but saves
us sending a sys_schedule message for each process every time we
balance the queues. Currently, when processes are moved up a
priority queue, their quantum is also renewed, but this can be
fiddled with.
- do_nice has been removed from kernel. PM answers to get- and
setpriority calls, updates its own nice variable as well as the
max_run_queue. This will be refactored once scheduling is moved to a
separate server. We will probably have PM update its local nice
value and then send a message to whoever is scheduling the process.
- changes to fix an issue in do_fork() where processes could run out
of quantum while bypassing the code path that handles it correctly.
The future plan is to remove the policy from do_fork() and implement
it in userspace too.
Currently a sequence of messages between a sender A and a receiver B of the
form: A.asynsend(M1, B); A.send(M2, B) may result in the receiver receiving
M1 first and then M2 or vice versa. This patch makes sure that the original
order M1, M2 is always preserved.
Note that the order of a hypothetical sequence A.asynsend(M1, B);
A.asynsend(M2, B) is already guaranteed by the implementation of
asynsend by design. Other senda-based wrappers can define their own
semantics.
- ack assumes that the direction flag in eflags is clear when
assigning two structures. It is implemented by a call to a built-in
function which is like memcpy but needs the flag to be clear
otherwise rubbish is copied. This patch fixes the kernel entries.
- When the cpu halts, the interrupts are enabled so the cpu may be
woken up. When the interrupt handler returns but another interrupt
is available it is also serviced immediately. This is not a problem
per-se. It only slightly breaks time accounting as kernel time spent
in the interrupt handler is accounted as idle.
- As the big kernel lock is locked/unlocked in the smp branch in the
time accounting functions (as they are called exactly at the places
we need to take the lock) this leads to a deadlock.
- we make sure that once the interrupt handler returns from the nested
trap, the interrupts are disabled. This means that only one
interrupt is serviced after idle is interrupted.
- this requires the loop in apic timer calibration to keep reenabling
the interrupts. I admit it is a little bit hackish (one line),
however, this code is a stupid corner case at the boot time.
Hopefully it does not matter too much.
IPC changes:
- receive() is changed to take an additional parameter, which is a pointer to
a status code.
- The status code is filled in by the kernel to provide additional information
to the caller. For now, the kernel only fills in the IPC call used by the
sender.
Syslib changes:
- sef_receive() has been split into sef_receive() (with the original semantics)
and sef_receive_status() which exposes the status code to userland.
- Ideally, every sys process should gradually switch to sef_receive_status()
and use is_ipc_notify() as a dependable way to check for notify.
- SEF has been modified to use is_ipc_notify() and demonstrate how to use the
new status code.
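A typical system process main loop after this change might look like the
following sketch; the headers and the handle_*() helpers are assumptions,
only sef_receive_status() and is_ipc_notify() come from the description above:

  #include <minix/ipc.h>      /* message, is_ipc_notify() */
  #include <minix/sef.h>      /* sef_receive_status() */
  #include <minix/sysutil.h>  /* panic() */

  static void handle_notify(message *m);   /* hypothetical handlers */
  static void handle_request(message *m);

  static void main_loop(void)
  {
      message m;
      int r, ipc_status;

      for (;;) {
          if ((r = sef_receive_status(ANY, &m, &ipc_status)) != OK)
              panic("sef_receive_status failed: %d", r);

          if (is_ipc_notify(ipc_status)) {
              /* Dependable: the kernel says this was a notify. */
              handle_notify(&m);
              continue;
          }
          handle_request(&m);
      }
  }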
- before enabling paging VM asks the kernel to resize its segments. This
may cause the kernel to segfault if APIC is used and an interrupt
happens between this call and paging being enabled. As these are 2
separate vmctl calls it is not atomic. This patch fixes this problem.
VM does not ask the kernel to resize the segments in a separate call
anymore.
The new segments limit is part of the "enable paging" call. It
generalizes this call in such a way that more information can be
passed as need be or the information may be completely different if
another architecture requires this.
- if an exception occurs in kernel and this exception is not handled
in a sane way and the kernel crashes, it also dumps what was loaded
in the general purpose registers exactly at the time of the
exception to help to debug the problem
the kernel. They are not used atm, but having them in trunk allows them
to be easily used when needed. To set a breakpoint that triggers when
the variable foo is written to (the most common use case), one calls:
breakpoint_set(vir2phys((vir_bytes) &foo), 0,
BREAKPOINT_FLAG_MODE_GLOBAL |
BREAKPOINT_FLAG_RW_WRITE |
BREAKPOINT_FLAG_LEN_4);
It can later be disabled using:
breakpoint_set(vir2phys((vir_bytes) &foo), 0,
BREAKPOINT_FLAG_MODE_OFF);
There are some limitations:
- There are at most four breakpoints (hardware limit); the index of the
breakpoint (0-3) is specified as the second parameter of
breakpoint_set.
- The breakpoint exception in the kernel is not handled and causes a
panic; it would be reasonably easy to change this by inspecting DR6,
printing a message, disabling the breakpoint and continuing. However,
in my experience even just a panic can be very useful.
- Breakpoints can be set only in the part of the address space that is
in every page table. It is useful for the kernel, but to use this for
user processes would require saving and restoring the debug registers
as part of the context switch. Although the CPU provides support for
local breakpoints (I implemented this as BREAKPOINT_FLAG_LOCAL) they
only work if task switching is used.
forget about the dirtypde bitmap and WIPEPDE/DONEPDE macros too.
check if mapping happens to already be in place, and if so, don't
reload cr3 (on the account of that mapping, that is).
don't reload cr3 unconditionally.
UPDATING INFO:
20100317:
/usr/src/etc/system.conf updated to ignore default kernel calls: copy
it (or merge it) to /etc/system.conf.
The hello driver (/dev/hello) added to the distribution:
# cd /usr/src/commands/scripts && make clean install
# cd /dev && MAKEDEV hello
KERNEL CHANGES:
- Generic signal handling support. The kernel no longer assumes PM as a signal
manager for every process. The signal manager of a given process can now be
specified in its privilege slot. When a signal has to be delivered, the kernel
performs the lookup and forwards the signal to the appropriate signal manager.
PM is the default signal manager for user processes, RS is the default signal
manager for system processes. To enable ptrace()ing for system processes, it
is sufficient to change the default signal manager to PM. This will temporarily
disable crash recovery, though.
- sys_exit() is now split into sys_exit() (i.e. exit() for system processes,
which generates a self-termination signal), and sys_clear() (i.e. used by PM
to ask the kernel to clear a process slot when a process exits).
- Added a new kernel call (i.e. sys_update()) to swap two process slots and
implement live update.
PM CHANGES:
- Posix signal handling is no longer allowed for system processes. System
signals are split into two fixed categories: termination and non-termination
signals. When a non-termination signal is processed, PM transforms the signal
into an IPC message and delivers the message to the system process. When a
termination signal is processed, PM terminates the process.
- PM no longer assumes itself as the signal manager for system processes. It now
makes sure that every system signal goes through the kernel before being
actually processed. The kernel will then dispatch the signal to the appropriate
signal manager which may or may not be PM.
SYSLIB CHANGES:
- Simplified SEF init and LU callbacks.
- Added additional predefined SEF callbacks to debug crash recovery and
live update.
- Fixed a temporary ack in the SEF init protocol. SEF init reply is now
completely synchronous.
- Added SEF signal event type to provide a uniform interface for system
processes to deal with signals. A sef_cb_signal_handler() callback is
available for system processes to handle every received signal. A
sef_cb_signal_manager() callback is used by signal managers to process
system signals on behalf of the kernel.
- Fixed a few bugs with memory mapping and DS.
VM CHANGES:
- Page faults and memory requests coming from the kernel are now implemented
using signals.
- Added a new VM call to swap two process slots and implement live update.
- The call is used by RS at update time and in turn invokes the kernel call
sys_update().
RS CHANGES:
- RS has been reworked with a better functional decomposition.
- Better kernel call masks. com.h now defines the set of very basic kernel calls
every system service is allowed to use. This makes system.conf simpler and
easier to maintain. In addition, this guarantees a higher level of isolation
for system libraries that use one or more kernel calls internally (e.g. printf).
- RS is the default signal manager for system processes. By default, RS
intercepts every signal delivered to every system process. This makes crash
recovery possible before bringing PM and friends in the loop.
- RS now supports fast rollback when something goes wrong while initializing
the new version during a live update.
- Live update is now implemented by keeping the two versions side-by-side and
swapping the process slots when the old version is ready to update.
- Crash recovery is now implemented by keeping the two versions side-by-side
and cleaning up the old version only when the recovery process is complete.
DS CHANGES:
- Fixed a bug when the process doing ds_publish() or ds_delete() is not known
by DS.
- Fixed the completely broken support for strings. String publishing is now
implemented in the system library and simply wraps publishing of memory ranges.
Ideally, we should adopt a similar approach for other data types as well.
- Test suite fixed.
DRIVER CHANGES:
- The hello driver has been added to the Minix distribution to demonstrate basic
live update and crash recovery functionalities.
- Other drivers have been adapted to conform to the new SEF interface.
swapcontext, and makecontext).
- Fix VM to not erroneously think the stack segment and data segment have
collided when a user-space thread invokes brk().
- Add test51 to test ucontext functionality.
- Add man pages for ucontext system calls.
Move archtypes.h to include/ dir, since several servers require it. Move
fpu.h and stackframe.h to arch-specific header directory. Make source
files and makefiles aware of the new header locations.
-Convert the include directory over to using bsdmake
syntax
-Update/add mkfiles
-Modify install(1) so that it can create symlinks
-Update makefiles to use new install(1) options
-Rename /usr/include/ibm to /usr/include/i386
-Create /usr/include/machine symlink to arch header files
-Move vm_i386.h to its new home in the /usr/include/i386
-Update source files to #include the header files at their
new homes.
-Add new gnu-includes target for building GCC headers
this change
- makes panic() variadic, doing full printf() formatting -
no more NO_NUM, and no more separate printf() statements
needed to print extra info (or something in hex) before panicking
- unifies panic() - same panic() name and usage for everyone -
vm, kernel and rest have different names/syntax currently
in order to implement their own luxuries, but no longer
- throws out the 1st argument, to make source less noisy.
the panic() in syslib retrieves the server name from the kernel
so it should be clear enough who is panicking; e.g.
panic("sigaction failed: %d", errno);
looks like:
at_wini(73130): panic: sigaction failed: 0
syslib:panic.c: stacktrace: 0x74dc 0x2025 0x100a
- throws out report() - printf() is more convenient and powerful
- harmonizes/fixes the use of panic() - there were a few places
that used printf-style formatting (didn't work) and newlines
(messes up the formatting) in panic()
- throws out a few per-server panic() functions
- cleans up a tie-in of tty with panic()
merging printf() and panic() statements to be done incrementally.
process waiting for" logic, which is duplicated a few times in the
kernel. (For a new feature for top.)
Introducing it and throwing out ESRCDIED and EDSTDIED (replaced by
EDEADSRCDST - so we don't have to care which part of the blocking is
failing in system.c) simplifies some code in the kernel and callers that
check for E{DEADSRCDST,ESRCDIED,EDSTDIED}, but don't care about the
difference, a fair bit, and more significantly doesn't duplicate the
'blocked-on' logic.