- as there are still KERNEL and IDLE entries, time accounting for
kernel and idle time works the same as for any other process
- every time we stop accounting for the currently running process,
kernel or idle, we read the TSC counter and increment the p_cycles
entry (see the sketch after this list)
- the process cycles inherently include some of the kernel cycles, as
we can stop accounting for the process only after we save its
context and we start accounting again just before we restore it
- this assumes that the system does not scale the CPU frequency, which
will be true for ... a long time ;-)
- as there are no tasks running, we don't need the TASK_PRIVILEGE
privilege anymore
- as there is no ring 1 anymore, there is no need for level0() to run
sensitive code in ring 0 on behalf of ring 1
- 286-related macros removed as cleanup
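The cycle accounting above boils down to reading the TSC at the two
switch points. A minimal sketch, assuming a 64-bit p_cycles field in
the proc structure; the helper names are illustrative, not the actual
kernel interface:

    #include <stdint.h>

    struct proc {
        uint64_t p_cycles;          /* cycles consumed by this process */
        /* ... */
    };

    static uint64_t tsc_at_switch;  /* TSC when accounting started */

    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* called just before we restore the context of the next process */
    void cycles_accounting_start(void)
    {
        tsc_at_switch = read_tsc();
    }

    /* called right after we save the context of p; KERNEL and IDLE
     * are charged through the same path as any other proc entry */
    void cycles_accounting_stop(struct proc *p)
    {
        p->p_cycles += read_tsc() - tsc_at_switch;
    }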
* Userspace changes to use the new kernel calls
- _taskcall(SYSTASK...) changed to _kernel_call(...)
- int 32 reused for the kernel calls
- _do_kernel_call() to make the trap to the kernel
- kernel_call() to make the actual kernel call from C using
_do_kernel_call() (a sketch follows this section)
- unlike an ipc call, the kernel call always succeeds as the kernel is
always available; however, the kernel may return an error
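A hedged sketch of the userspace side under the assumptions above; the
register convention (message pointer and status in %eax) and the
minimal message type are guesses for illustration, and the real
_do_kernel_call() is an assembly stub rather than inline assembly:

    /* minimal stand-in for the real message type */
    typedef struct { int m_type; /* ... payload ... */ } message;

    /* trap to the kernel via the reused int 32 vector; assumed to
     * pass the message pointer and get the status back in %eax */
    static int _do_kernel_call(message *m_ptr)
    {
        int status;
        __asm__ __volatile__("int $32"
            : "=a" (status)
            : "a" (m_ptr)
            : "memory");
        return status;
    }

    /* what used to be _taskcall(SYSTASK, ...) */
    int kernel_call(int syscallnr, message *msgptr)
    {
        msgptr->m_type = syscallnr;
        /* the trap itself always succeeds; errors, if any, come
         * back as the kernel's return value */
        return _do_kernel_call(msgptr);
    }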
* Kernel side implementation of kernel calls
- the SYSTEM task does not run anymore, only the proc table entry is
preserved
- every data_copy(SYSTEM ...) is now data_copy(KERNEL ...)
- "locking" is an empty operation now as everything runs in
kernel
- sys_task() is replaced by kernel_call() which copies the
message into the kernel, dispatches the call to its handler and
finishes by either copying the results back to userspace (if
need be) or by suspending the process because of VM
- suspended processes are later made runnable once the memory
issue is resolved, picked up by the scheduler, and only at
this time is the call resumed (in fact restarted), which does
not need to copy the message from userspace again as the
message is already saved in the process structure (see the
sketch after this section)
- no need for the vmrestart queue, the scheduler will restart
the system calls
- no special case in do_vmctl(), all requests remove the
RTS_VMREQUEST flag
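A self-contained model of the flow just described; the handler table,
the VMSUSPEND result code and the field names are illustrative
stand-ins, not the kernel's exact interface:

    typedef struct { int m_type; int m_arg; } message;

    enum { OK = 0, VMSUSPEND = -101 };      /* illustrative values */

    struct proc {
        message p_saved_msg;  /* the request survives a suspend here */
        int     p_rts_flags;  /* RTS_VMREQUEST etc. */
    };

    /* hypothetical handler table indexed by the call number */
    extern int (*call_vec[])(message *m, struct proc *caller);
    extern void copy_msg_from_user(struct proc *p, message *u, message *d);
    extern void copy_msg_to_user(struct proc *p, message *s, message *u);

    void kernel_call(message *m_user, struct proc *caller)
    {
        int result;

        /* 1. copy the request into the kernel, and into the proc
         *    structure, so that a restart needs no second copy */
        copy_msg_from_user(caller, m_user, &caller->p_saved_msg);

        /* 2. dispatch; "locking" is gone, we already run in kernel */
        result = call_vec[caller->p_saved_msg.m_type](
            &caller->p_saved_msg, caller);

        /* 3. suspend on a VM request (the scheduler restarts the
         *    call later) or deliver the reply to userspace */
        if (result == VMSUSPEND)
            return;
        caller->p_saved_msg.m_type = result;
        copy_msg_to_user(caller, &caller->p_saved_msg, m_user);
    }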
- switch_address_space() implements a switch of the user address space
for the destination process (see the sketch after this list)
- this makes the memory of this process easily accessible, e.g. a
pointer valid in userspace can be used with little extra complexity
to access the process's memory
- the switch does not happen only just before we return to userspace;
rather, it happens right after we know which process we are going to
schedule, before we start processing the misc flags of this process,
so that its memory is available
- if the process becomes not runnable while processing the misc flags,
we pick a new process and switch the address space again, which
possibly introduces a little more overhead; however, it is
hopefully hidden by reducing the overhead when we actually access
the memory
- the syscalls are pretty much just ipc calls; however, sendrec() is
used to implement the system task (sys) calls
- sendrec() won't be used for this anymore, therefore ipc calls will
become pure ipc calls
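A sketch of the address-space switch referenced above, assuming the
page directory's physical address is cached in the proc structure as
p_seg.p_cr3; the surrounding names are illustrative:

    struct proc {
        struct { unsigned long p_cr3; } p_seg;  /* page dir base */
        /* ... */
    };

    static struct proc *ptproc;  /* whose page tables are loaded now */

    static inline void write_cr3(unsigned long value)
    {
        __asm__ __volatile__("mov %0, %%cr3" : : "r" (value) : "memory");
    }

    void switch_address_space(struct proc *p)
    {
        /* skip the implicit TLB flush if the right tables are
         * already loaded */
        if (ptproc == p)
            return;
        write_cr3(p->p_seg.p_cr3);
        ptproc = p;
    }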
There is not that much use for it on a single CPU; however, a deadlock
between the kernel and the system task can be detected, as can a
runaway loop. If the kernel gets locked up, the timer interrupts don't
occur (as all interrupts are disabled in kernel mode). The only chance
is to interrupt the kernel by a non-maskable interrupt.
This patch generates NMIs using performance counters. It uses the most
widely available performance counters. As the performance counters are
highly model-specific, this patch is not guaranteed to work on every
machine. Unfortunately this is also true for KVM :-/ On the other
hand, adding support for other models is not extremely difficult
and the framework hopefully makes it easy enough.
Depending on the frequency of the CPU, an NMI is generated at most
about every 0.5s. If the CPU's speed is less than 2GHz, it is
generated at most every 1s. In general an NMI is generated much less
often, as the performance counter counts down only if the CPU is not
idle. Therefore the overhead of this feature is fairly minimal even if
the load is high. (A sketch of arming the counter follows below.)
Upon detecting that the kernel is locked up, the kernel dumps the
state of its registers and panics.
Local APIC must be enabled for the watchdog to work.
The code is _always_ compiled in, however, it is only enabled if
watchdog=<non-zero> is set in the boot monitor.
One corner case is serial console debugging. As dumping a lot of stuff
to the serial link may take a lot of time, the watchdog does not
detect lockups during this time, as it would result in too many false
positives; 10 NMIs have to be handled before the lockup is detected,
which means something between ~5s and ~10s.
Another corner case is that the watchdog is enabled only after paging
is enabled, as it would be pure madness to try to get it right before.
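To illustrate the mechanism, here is a hedged sketch of arming one
counter to deliver an NMI, modeled on the Intel architectural PMU (the
MSR numbers and event encoding come from the SDM; the helper names and
the exact layout the patch uses are assumptions):

    #include <stdint.h>

    #define MSR_PERFEVTSEL0  0x186    /* IA32_PERFEVTSEL0 */
    #define MSR_PERFCTR0     0xc1     /* IA32_PMC0 */
    #define LAPIC_LVTPC      0x340    /* local APIC LVT perfmon entry */
    #define APIC_DM_NMI      (4 << 8) /* deliver overflow as an NMI */

    #define UNHALTED_CYCLES  0x3c     /* counts only when not idle */
    #define EVTSEL_USR       (1 << 16)
    #define EVTSEL_OS        (1 << 17)
    #define EVTSEL_INT       (1 << 20)  /* interrupt on overflow */
    #define EVTSEL_EN        (1 << 22)

    extern void ia32_msr_write(uint32_t msr, uint32_t hi, uint32_t lo);
    extern void lapic_write(uint32_t reg, uint32_t value);

    void arm_watchdog(uint32_t cpu_freq_hz)
    {
        /* aim for ~1s of non-idle cycles; above ~2GHz that does not
         * fit the sign-extended 31-bit counter write, so settle for
         * ~0.5s instead (hence the 0.5s/1s bounds stated above) */
        uint32_t ticks = cpu_freq_hz;
        if (ticks > 0x7fffffffU)
            ticks /= 2;

        lapic_write(LAPIC_LVTPC, APIC_DM_NMI);
        /* counter counts up and fires on overflow; the low 32 bits
         * are sign-extended to the full counter width by the HW */
        ia32_msr_write(MSR_PERFCTR0, 0, -ticks);
        ia32_msr_write(MSR_PERFEVTSEL0, 0,
            EVTSEL_EN | EVTSEL_INT | EVTSEL_OS | EVTSEL_USR |
            UNHALTED_CYCLES);
    }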
- gas2ack cannot handle all variants of some instructions. Until this
issue is addressed, this patch places a big warning where appropriate.
This code is not supposed to change frequently.
- after a trap to the kernel, the code automatically switches to the
kernel stack, which will in the future be local to the CPU
- the k_reenter variable is replaced by a test of whether the saved CS
is the kernel CS or not; the information is passed on if needed. This
removes a global variable which would need to be CPU-local
- no need for global variables describing the exception or trap
context; this information is kept on the stack and a pointer to it is
passed to the C code as a single structure (sketched after this list)
- removed loadedcr3 variable and its use replaced by reading the %cr3
register
- no need to redisable interrupts in restart() as they are already
disabled.
- unified handling of traps that do and do not push an error code
- removed the save() function, as the process context is no longer
saved directly to the process table but as required by the trap code;
essentially the save() code is inlined everywhere, not only in the
exception handling routine
- returning from a syscall is more arch-independent - it sets the
return register in C
- the top of the x86 stack contains the current CPU id and a pointer
to the currently scheduled process (the one just interrupted) so the
mode switch code can find where to save the context without using
proc_ptr, which will be CPU-local in the future and therefore
difficult to access in assembler and expensive to access in general
- some more cleanup of the level0 code. No need to read back the
argument passed in %eax from the proc structure; the mode switch code
does not clobber the general registers, hence we can just call what is
in %eax
- many assembly macros in sconst.h, as they will be reused by the apic
assembly
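A sketch of the single context structure mentioned above and the
C-side pieces built on it; the field layout follows the usual i386
trap frame and the names (exception_frame, retreg, KERN_CS) are
illustrative, not the patch's exact ones:

    #include <stdint.h>

    /* pushed by the trap stubs (vector, plus a dummy errcode when
     * the CPU does not supply one) and by the CPU itself */
    struct exception_frame {
        uint32_t vector;           /* exception/trap number */
        uint32_t errcode;          /* 0 pushed by the stub if absent */
        uint32_t eip, cs, eflags;
        uint32_t esp, ss;          /* only on a privilege change */
    };

    struct proc {
        struct { uint32_t retreg; /* %eax on i386 */ } p_reg;
    };

    #define KERN_CS 0x08           /* illustrative kernel selector */

    /* replaces the old k_reenter global: did the trap interrupt the
     * kernel or userspace? */
    static inline int from_kernel(const struct exception_frame *frame)
    {
        return (frame->cs & ~0x3) == KERN_CS;
    }

    /* arch-independent syscall return: store the result where the
     * mode switch code restores registers from, instead of patching
     * it in by assembly */
    static inline void arch_set_syscall_result(struct proc *p, uint32_t r)
    {
        p->p_reg.retreg = r;
    }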
- the PIC master and slave irq handlers don't pass the irq hook
pointer but just the irq number; this gives a little more information
to the C handler, as the irq number is not lost
- the irq code path is more architecture-independent. i386 hw
interrupts are called irqs, and wherever the code is arch-independent
enough, hw_intr_ functions are called to mask/unmask interrupts
- the legacy PIC is not the only possible interrupt controller in the
x86 world, therefore the intr_(un)mask functions were renamed to
signal their functionality explicitly; APIC will add its own
- masking and unmasking PIC interrupt lines is removed from assembler;
all the functionality is rewritten in C and moved to i8259.c (a sketch
follows below)
- interrupt handlers have to unmask the interrupt line once all irq
handlers are done; the assembler does not do it anymore
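For reference, the i8259 line mask lives in each PIC's OCW1 (the data
port), so the C replacement can be as small as the sketch below;
outb()/inb() are assumed to be the usual port-I/O helpers and the
function names are illustrative:

    #include <stdint.h>

    #define INT_CTLMASK   0x21   /* master PIC data port (OCW1) */
    #define INT2_CTLMASK  0xa1   /* slave PIC data port (OCW1) */

    extern uint8_t inb(uint16_t port);
    extern void outb(uint16_t port, uint8_t value);

    /* disable the given irq line (0..15) by setting its mask bit */
    void i8259_mask(int irq)
    {
        uint16_t port = (irq < 8) ? INT_CTLMASK : INT2_CTLMASK;
        outb(port, inb(port) | (1 << (irq & 7)));
    }

    /* reenable the line; handlers call this once they are all done */
    void i8259_unmask(int irq)
    {
        uint16_t port = (irq < 8) ? INT_CTLMASK : INT2_CTLMASK;
        outb(port, inb(port) & ~(1 << (irq & 7)));
    }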