Before safecopies, the IO_ENDPT and DL_ENDPT message fields were needed
to know which actual process to copy data from/to, as that process may
not always be the caller. Now that we have full safecopy support, these
fields have become useless for that purpose: the owner of the grant is
*always* the caller. Allowing the caller to supply another endpoint is
in fact dangerous, because the callee may then end up using a grant
from a third party. One could call this a variant of the confused
deputy problem.
From now on, safecopy calls should always use the caller's endpoint as
grant owner. This fully obsoletes the DL_ENDPT field in the
inet/ethernet protocol. IO_ENDPT has other uses besides identifying the
grant owner though. This patch renames IO_ENDPT to USER_ENDPT, not only
because that is a more fitting name (it should never be used for I/O
after all), but also in order to intentionally break any old system
source code outside the base system. If this patch breaks your code,
fixing it is fairly simple:
- DL_ENDPT should be replaced with m_source;
- IO_ENDPT should be replaced with m_source when used for safecopies;
- IO_ENDPT should be replaced with USER_ENDPT for any other use, e.g.
  when setting REP_ENDPT, matching requests in CANCEL calls, getting
  DEV_SELECT flags, and retrieving the real user process's endpoint
  in DEV_OPEN (see the sketch below).
The changes in this patch are binary backward compatible.
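As an illustration, a driver-side handler would change roughly as
follows (a sketch only: the buffer and count variables are made up and
the sys_safecopyto() call follows the common driver pattern):

    /* Before: the grant owner was taken from the message body, so a
     * caller could make us use a grant belonging to a third party. */
    r = sys_safecopyto(m_ptr->IO_ENDPT, (cp_grant_id_t) m_ptr->IO_GRANT,
            0, (vir_bytes) buf, count, D);

    /* After: the grant owner is always the caller itself. */
    r = sys_safecopyto(m_ptr->m_source, (cp_grant_id_t) m_ptr->IO_GRANT,
            0, (vir_bytes) buf, count, D);

    /* The user process's endpoint remains available for other uses,
     * e.g. matching a later CANCEL request. */
    endpoint_t user_e = m_ptr->USER_ENDPT;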
- kernel maintains a cpu_info array which contains various
information about each cpu as filled when each cpu boots
- the information contains identification, features etc.
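A sketch of what such a per-cpu record may look like (the exact field
set is illustrative):

    struct cpu_info {
            u8_t    vendor;         /* identification: cpu vendor */
            u8_t    family;
            u8_t    model;
            u8_t    stepping;
            u32_t   freq;           /* in MHz */
            u32_t   flags[2];       /* feature flags from cpuid */
    };

    EXTERN struct cpu_info cpu_info[CONFIG_MAX_CPUS];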
- flush the TLB of a process only if its page tables have been changed
  and they are already loaded on this cpu, which means that there may
  be stale entries in the TLB. Until now SMP always flushed the TLB to
  make sure everything is consistent.
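In pseudo-C the check amounts to something like this (the per-process
stale-TLB cpu mask is an assumed piece of bookkeeping):

    /* Flush only if this process's page tables changed and may still
     * be (partially) cached in this cpu's TLB. */
    if (p->p_stale_tlb & (1 << cpuid)) {
            refresh_tlb();                  /* e.g. reload cr3 */
            p->p_stale_tlb &= ~(1 << cpuid);
    }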
- accidentally this wasn't part of the SMP merge and the
  implementation remained incomplete, with the timer ticking
  periodically
- the APIC timer is set for a single shot and restarted every time it
  expires. This way we can keep the APs truly idle
- the timer is restarted a little later, right before returning to
  userspace
- LAPIC_TIMER_ICR is written before LAPIC_LVTTR so the newest value is
used
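A sketch of the restart path observing that ordering rule
(lapic_write() and the vector constant are assumed names):

    static void lapic_restart_timer_one_shot(u32_t ticks)
    {
            /* Write the initial count first so that the newest value
             * is in place when the LVT timer register is re-armed. */
            lapic_write(LAPIC_TIMER_ICR, ticks);
            lapic_write(LAPIC_LVTTR, APIC_TIMER_INT_VECTOR);
    }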
- fixed spurious and error interrupt handlers
- so as not to hog the system, the warning isn't reported every time,
  just once every 100 occurrences; similarly for the spurious PIC
  interrupts
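The rate limiting is a simple counter, roughly:

    static void spurious_irq_handler(void)
    {
            static unsigned count;

            /* Report only once every 100 occurrences so that console
             * output does not hog the system. */
            if ((count++ % 100) == 0)
                    printf("spurious interrupt, count %u\n", count);
    }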
- a different set of MSRs and performance counters is used on AMD
- when initializing the NMI watchdog, the test for the Intel
  architecture performance counters feature now only applies to Intel
  CPUs
- NMI is enabled if the CPU belongs to a family which has the
performance counters that we use
- sometimes the system needs to know precisely what type of cpu it is
  running on. The cpu type is detected during arch-specific
  initialization and kept in the machine structure for later use.
- as a side-effect the information is exported to userland
- the Intel architecture cycle counter (performance counter) does not
  count when the CPU is idle, therefore we use a busy loop instead of
  halting the cpu when there is nothing to schedule
- the downside is that handling interrupts may be accounted as idle
time if a sample is taken before we get out of the nested trap and
pick a new process
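Schematically (nothing_runnable() and arch_pause() are stand-in names):

    static void idle(void)
    {
            /* The cycle counter does not advance in the halted state,
             * so instead of halting we busy-wait until an interrupt
             * makes something runnable. */
            while (nothing_runnable())
                    arch_pause();
    }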
- with profile --nmi, the kernel uses NMI-watchdog-based sampling
  built on the Intel architecture performance counters
- using NMI makes kernel profiling possible
- watchdog kernel lockup detection is disabled while sampling as we
may get unpredictable interrupts in kernel and thus possibly many
false positives
- if the watchdog is not enabled at boot time, profiling enables it
  and turns it off again when done
- contributed by Bjorn Swift
- adds process accounting, for example counting the number of messages
  sent, how often the process was preempted and how much time it spent
in the run queue. These statistics, along with the current cpu load,
are sent back to the user-space scheduler in the Out Of Quantum
message.
- the user-space scheduler may choose to make use of these statistics
  when making scheduling decisions. For instance, the cpu load becomes
especially useful when scheduling on multiple cores.
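A sketch of the counters involved (the exact field set and the message
layout are assumptions):

    struct p_accounting {
            u64_t           enter_queue;    /* time when enqueued */
            u64_t           time_in_queue;  /* accumulated queue time */
            unsigned long   dequeues;
            unsigned long   ipc_sync;       /* blocking messages sent */
            unsigned long   ipc_async;      /* asynchronous messages */
            unsigned long   preempted;      /* involuntary preemptions */
    };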
- when a process is migrated to a different CPU it may have an active
FPU context in the processor registers. We must save it and migrate
it together with the process.
- EBADCPU is returned if the scheduler tries to run a process on a CPU
  that either does not exist or isn't booted
- this change was originally meant to deal with the stupid cpuid
  instruction, which provides totally useless information about
  hyper-threading, and with MPS, which does not deal with ht at all.
  ACPI provides correct information. If ht is turned off it looks like
  some CPUs failed to boot. Nevertheless this patch may be handy for
  testing/benchmarking in the future.
- this makes sure that each process always runs with an updated TLB
- this is the simplest way to achieve consistency. As it means
  significant performance degradation when not required, this is not
  the final solution and will be refined
- APIC timer always reprogrammed if expired
- timer tick never happens when in kernel => never immediate return
from userspace to kernel because of a buffered interrupt
- renamed argument to lapic_set_timer_one_shot()
- removed arch_ prefix from timer functions
- any cpu can use smp_schedule() to tell another cpu to reschedule
- if an AP is idle, it turns off timer as there is nothing to
preempt, no need to wakeup just to go back to sleep again
- if a cpu makes a process runnable on an idle cpu, it must wake it up
to reschedule
- pressing 'B' on the serial console prints statistics for BKL per cpu.
- 'b' resets the counters
- it presents the number of cycles each CPU spends in the kernel and
  how many cycles it spends spinning while waiting for the BKL
- it shows an optimistic estimate of how many times we get the lock
  immediately without spinning. As the test is not atomic, the lock
  may already be held by some other cpu before we actually try to
  acquire it.
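Roughly, per cpu (the counter names are made up):

    /* Non-atomic peek: another cpu may still grab the lock between
     * this test and the acquire, so "immediate" is only an estimate. */
    if (big_kernel_lock.val == 0)
            get_cpulocal_var(bkl_succ)++;   /* looked free */
    get_cpulocal_var(bkl_tries)++;
    spinlock_lock(&big_kernel_lock);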
- cross-address space copies use these slots to map user memory for
  the kernel. This avoids any collisions between CPUs
- well, we only have a single CPU running at a time; this is just to
  be safe for the future
- apic_send_ipi() to send inter-processor interrupts (IPIs)
- APIC IPI schedule and halt handlers to signal cross-cpu that a cpu
  should reschedule or halt
- various little changes to let APs run
- no processes are scheduled on the APs and therefore they are idle,
  except for being interrupted by a timer from time to time
- tsc_ctr_switch is made cpu local
- although it is an x86-specific variable, it must be declared
  globally as the cpulocal implementation does not allow otherwise
- each CPU has its own runqueues
- processes on the BSP are put on the runqueues later, after the
  switch to the final stack when cpuid works, to avoid special cases
- enqueue() and dequeue() use the run queues of the cpu the process is
assigned to
- pick_proc() uses the local run queues
- printing of per-CPU run queues ('2') on serial console
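Schematically (a simplified head insert; the real enqueue() appends at
the queue tail and handles priorities fully):

    struct proc *rdy_head[CONFIG_MAX_CPUS][NR_SCHED_QUEUES];

    void enqueue(struct proc *rp)
    {
            /* run queues of the cpu the process is assigned to, which
             * is not necessarily the caller's cpu */
            struct proc **head = &rdy_head[rp->p_cpu][rp->p_priority];

            rp->p_nextready = *head;
            *head = rp;
    }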
- APs configure local timers
- while configuring the local APIC timer the CPUs fiddle with the
  interrupt handlers. As the interrupt table is shared, the BSP must
  not run
- APs wait until the BSP turns paging on; it is not possible to safely
  execute any code on the APs until paging can be turned on there as
  well, and it must be done synchronously everywhere
- APs turn paging on but do not continue and wait
- to isolate execution inside kernel we use a big kernel lock
implemented as a spinlock
- the lock is acquired asap after entering kernel mode and released as
  late as possible. Only one CPU at a time can execute the core kernel
code
- measurements on real hw show that the overhead of this lock is close
  to 0% of kernel time for the current system
- the overhead of this lock may be as high as 45% of kernel time in
virtual machines depending on the ratio between physical CPUs
available and emulated CPUs. The performance degradation is
significant
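In outline (the test-and-set and pause primitives are assumed):

    typedef struct { volatile int val; } spinlock_t;

    static spinlock_t big_kernel_lock;

    static void bkl_lock(void)
    {
            /* Spin until the current holder releases; only one cpu at
             * a time gets past this point into core kernel code. */
            while (arch_test_and_set(&big_kernel_lock.val))
                    arch_pause();
    }

    static void bkl_unlock(void)
    {
            big_kernel_lock.val = 0;        /* plus a write barrier */
    }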
- kernel detects CPUs by searching ACPI tables for local apic nodes
- each CPU has its own TSS that points to its own stack. All cpus boot
  on the same boot stack (in sequence) but each switches to its private
  stack as soon as it can.
- final booting code in main() placed in bsp_finish_booting() which is
executed only after the BSP switches to its final stack
- apic functions to send startup interrupts
- assembler functions to handle CPU features not needed for single cpu
  mode, like memory barriers, HT detection etc.
- new files kernel/smp.[ch], kernel/arch/i386/arch_smp.c and
kernel/arch/i386/include/arch_smp.h
- 16-bit trampoline code for the APs. It is executed by each AP after
  receiving the startup IPIs; it brings the CPUs up to 32-bit mode and
  lets them spin in an infinite loop so they don't do any damage.
- implementation of kernel spinlock
- CONFIG_SMP and CONFIG_MAX_CPUS set by the build system
- most global variables carry information which is specific to the
local CPU and each CPU must have its own copy
- cpu local variables must be declared in cpulocal.h between the
  DECLARE_CPULOCAL_START and DECLARE_CPULOCAL_END markers using the
  DECLARE_CPULOCAL macro
- to access the cpu local data the provided macros must be used
get_cpu_var(cpu, name)
get_cpu_var_ptr(cpu, name)
get_cpulocal_var(name)
get_cpulocal_var_ptr(name)
- using these macros makes future changes in the implementation
  possible
- switching to ELF will make the declaration of cpu local data much
simpler, e.g.
CPULOCAL int blah;
anywhere in the kernel source code
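For example, a per-cpu counter (the variable itself is made up) would
be declared and used like this:

    /* in cpulocal.h, between the markers: */
    DECLARE_CPULOCAL_START
    DECLARE_CPULOCAL(unsigned, irq_count);
    DECLARE_CPULOCAL_END

    /* at use sites: */
    get_cpulocal_var(irq_count)++;                   /* this cpu's copy */
    unsigned n = get_cpu_var(other_cpu, irq_count);  /* another cpu's */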
- kernel turns on IO APICs if no_apic is _not_ set or is equal to 0
- the pci driver must use the acpi driver to set up IRQ routing,
  otherwise the system cannot work correctly, except on systems like
  KVM that use only the legacy (E)ISA IRQs 0-15
- kernel exports DSDP (the root pointer where ACPI parsing starts) and
apic_enabled in the machine structure.
- the ACPI driver uses DSDP to locate ACPI in memory. acpi_enabled
  tells the PCI driver to query ACPI for IRQ routing information.
- the ability for kernel to use ACPI tables to detect IO APICs. It is
the bare minimum the kernel needs to know about ACPI tables.
- it will be used to find out about processors as the MPS tables are
  deprecated by ACPI and not all vendors provide them.
- removes p_delivermsg_lin item from the process structure and code
related to it
- as with the send part, the receive part does not need to use the
  PHYS_COPY_CATCH() and umap_local() couple.
- The address space of the target process is installed before
delivermsg() is called.
- unlike the linear address, the virtual address does not change when
paging is turned on nor after fork().
- the FPU context is stored only on a conflict between 2 FPU users, or
  when exporting the context of a process to userspace while it is the
  active user of the FPU
- FPU has its owner (fpu_owner) which points to the process whose
state is currently loaded in FPU
- the FPU exception is only turned on when scheduling a process which
is not the owner of FPU
- the FPU state is restored for the process that generated the FPU
  exception. This process runs immediately, without letting the
  scheduler pick a new process, to resolve the FPU conflict asap and to
  minimize FPU thrashing and FPU exception handler execution
- all non-FPU-exception kernel entries are faster, as the FPU state is
  neither checked nor saved
- removed MF_USED_FPU flag, only MF_FPU_INITIALIZED remains to signal
that a process has used FPU in the past
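Schematically, the two paths fit together like this (the helper names
are a sketch):

    static struct proc *fpu_owner;  /* whose state is loaded in FPU */

    /* when scheduling p: trap FPU use only if p is not the owner */
    if (fpu_owner != p)
            enable_fpu_exception();         /* set CR0.TS */
    else
            disable_fpu_exception();        /* clear CR0.TS */

    /* in the FPU exception (#NM) handler: */
    if (fpu_owner != NULL && fpu_owner != p)
            save_fpu(fpu_owner);            /* fxsave into old owner */
    restore_fpu(p);                         /* fxrstor; p runs now */
    fpu_owner = p;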
There seems to have been a broken assumption in the fpu context
restoring code. It restores the context of the running process, without
guarantee that the current process is the one that will be scheduled.
This caused fpu saving for a different process to be triggered without
fpu hardware being enabled, causing an fpu exception in the kernel. This
practically only shows up with DEBUG_RACE on. Fix by thruby+me.
The fix
. is to only set the fpu-in-use-by-this-process flag in the
exception handler, and then take care of fpu restoring when
actually returning to userspace
And the patch
. translates fpu saving and restoring to c in arch_system.c,
getting rid of a juicy chunk of assembly
. makes osfxsr_feature private to arch_system.c
. removes most of the arch dependent code from do_sigsend
- Currently the cpu time quantum is timer-ticks based. Thus the
  remaining quantum is decreased only if the process is interrupted
by a timer tick. As processes block a lot this typically does not
happen for normal user processes. Also the quantum depends on the
frequency of the timer.
- This change makes the quantum milliseconds-based. Internally the
  milliseconds are translated into cpu cycles. Every time userspace
  execution is interrupted by the kernel, the cycles just consumed by
  the current process are deducted from the remaining quantum.
- It makes the quantum system timer frequency independent.
- The boot processes' quantum is loosely derived from the tick-based
  quanta and the 60Hz timer, and is subject to future change
- the 64-bit arithmetic is a little ugly; it will be changed once we
  have compiler support for 64-bit integers (soon)
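In outline, using the minix-style 64-bit helpers and the later
context_stop() naming (a sketch):

    /* On every kernel entry, charge the cycles just consumed by the
     * interrupted process against its remaining quantum. */
    void context_stop(struct proc *p)
    {
            u64_t now, used;

            read_tsc_64(&now);
            used = sub64(now, tsc_ctr_switch);

            p->p_cycles = add64(p->p_cycles, used);
            if (cmp64(used, p->p_cpu_time_left) < 0)
                    p->p_cpu_time_left = sub64(p->p_cpu_time_left, used);
            else
                    p->p_cpu_time_left = make64(0, 0);  /* out of quantum */

            tsc_ctr_switch = now;
    }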
- Makefile updates
- Update mkdep
- Build fixes/warning cleanups for some programs
- Restore leading underscores on global syms in kernel asm files
- Increase ramdisk size
ask to map in oxpcie i/o memory and support serial i/o for it in the
kernel. set oxpcie=<address> in boot monitor (retrieve address using
pci_debug=1 output). (no sanity checking is done on the address
currently.) disabled by default.
The change also contains some other minor cleanup (a new serial.h to set
register info common to UART and the OXPCIe card, in-kernel memory
mapping a little more structured and env_get() to get sysenv variables
without knowing about the params_buffer).
- this patch only renames schedcheck() to switch_to_user(),
  cycles_accounting_stop() to context_stop() and restart() to
  restore_user_context()
- the motivation is that since the introduction of schedcheck() it has
been abused for many things. It deserves a better name. It should
express the fact that from the moment we call the function we are in
the process of switching to user.
- cycles_accounting_stop() was originally a single-purpose function.
  As this function is called at very convenient places, it is used for
  other things too, e.g. (un)locking the kernel. Thus it deserves a
  better name too.
- under the old names, restart() did not call schedcheck(); now,
  however, calls to restart() are replaced by calls to schedcheck()
  [switch_to_user], which in turn calls restart()
  [restore_user_context]